00:00:00.001 Started by upstream project "autotest-per-patch" build number 126220 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "jbp-per-patch" build number 23964 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.147 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.148 The recommended git tool is: git 00:00:00.148 using credential 00000000-0000-0000-0000-000000000002 00:00:00.150 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.177 Fetching changes from the remote Git repository 00:00:00.180 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.204 Using shallow fetch with depth 1 00:00:00.204 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.204 > git --version # timeout=10 00:00:00.227 > git --version # 'git version 2.39.2' 00:00:00.227 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.243 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.243 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/changes/56/22956/10 # timeout=5 00:00:06.676 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.686 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.696 Checking out Revision d49304e16352441ae7eebb2419125dd094201f3e (FETCH_HEAD) 00:00:06.696 > git config core.sparsecheckout # timeout=10 00:00:06.707 > git read-tree -mu HEAD # timeout=10 00:00:06.722 > git checkout -f d49304e16352441ae7eebb2419125dd094201f3e # timeout=5 00:00:06.749 Commit message: "jenkins/jjb-config: Add ubuntu2404 to per-patch and nightly testing" 00:00:06.749 > git rev-list --no-walk 7caca6989ac753a10259529aadac5754060382af # timeout=10 00:00:06.871 [Pipeline] Start of Pipeline 00:00:06.887 [Pipeline] library 00:00:06.889 Loading library shm_lib@master 00:00:06.889 Library shm_lib@master is cached. Copying from home. 00:00:06.903 [Pipeline] node 00:00:06.912 Running on VM-host-SM17 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:06.913 [Pipeline] { 00:00:06.923 [Pipeline] catchError 00:00:06.924 [Pipeline] { 00:00:06.935 [Pipeline] wrap 00:00:06.943 [Pipeline] { 00:00:06.950 [Pipeline] stage 00:00:06.951 [Pipeline] { (Prologue) 00:00:06.968 [Pipeline] echo 00:00:06.969 Node: VM-host-SM17 00:00:06.974 [Pipeline] cleanWs 00:00:06.981 [WS-CLEANUP] Deleting project workspace... 00:00:06.981 [WS-CLEANUP] Deferred wipeout is used... 
00:00:06.987 [WS-CLEANUP] done 00:00:07.161 [Pipeline] setCustomBuildProperty 00:00:07.241 [Pipeline] httpRequest 00:00:07.270 [Pipeline] echo 00:00:07.271 Sorcerer 10.211.164.101 is alive 00:00:07.278 [Pipeline] httpRequest 00:00:07.282 HttpMethod: GET 00:00:07.283 URL: http://10.211.164.101/packages/jbp_d49304e16352441ae7eebb2419125dd094201f3e.tar.gz 00:00:07.283 Sending request to url: http://10.211.164.101/packages/jbp_d49304e16352441ae7eebb2419125dd094201f3e.tar.gz 00:00:07.318 Response Code: HTTP/1.1 200 OK 00:00:07.318 Success: Status code 200 is in the accepted range: 200,404 00:00:07.319 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_d49304e16352441ae7eebb2419125dd094201f3e.tar.gz 00:00:32.258 [Pipeline] sh 00:00:32.539 + tar --no-same-owner -xf jbp_d49304e16352441ae7eebb2419125dd094201f3e.tar.gz 00:00:32.557 [Pipeline] httpRequest 00:00:32.580 [Pipeline] echo 00:00:32.582 Sorcerer 10.211.164.101 is alive 00:00:32.591 [Pipeline] httpRequest 00:00:32.596 HttpMethod: GET 00:00:32.597 URL: http://10.211.164.101/packages/spdk_a95bbf2336179ce1093307c872b1debc25193da2.tar.gz 00:00:32.598 Sending request to url: http://10.211.164.101/packages/spdk_a95bbf2336179ce1093307c872b1debc25193da2.tar.gz 00:00:32.605 Response Code: HTTP/1.1 200 OK 00:00:32.606 Success: Status code 200 is in the accepted range: 200,404 00:00:32.607 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_a95bbf2336179ce1093307c872b1debc25193da2.tar.gz 00:01:32.218 [Pipeline] sh 00:01:32.498 + tar --no-same-owner -xf spdk_a95bbf2336179ce1093307c872b1debc25193da2.tar.gz 00:01:35.787 [Pipeline] sh 00:01:36.067 + git -C spdk log --oneline -n5 00:01:36.067 a95bbf233 blob: set parent_id properly on spdk_bs_blob_set_external_parent. 
00:01:36.067 248c547d0 nvmf/tcp: add option for selecting a sock impl 00:01:36.067 2d30d9f83 accel: introduce tasks in sequence limit 00:01:36.067 2728651ee accel: adjust task per ch define name 00:01:36.067 e7cce062d Examples/Perf: correct the calculation of total bandwidth 00:01:36.088 [Pipeline] writeFile 00:01:36.105 [Pipeline] sh 00:01:36.412 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:36.472 [Pipeline] sh 00:01:36.749 + cat autorun-spdk.conf 00:01:36.749 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:36.749 SPDK_TEST_NVMF=1 00:01:36.749 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:36.749 SPDK_TEST_URING=1 00:01:36.749 SPDK_TEST_USDT=1 00:01:36.749 SPDK_RUN_UBSAN=1 00:01:36.749 NET_TYPE=virt 00:01:36.749 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:36.756 RUN_NIGHTLY=0 00:01:36.757 [Pipeline] } 00:01:36.770 [Pipeline] // stage 00:01:36.785 [Pipeline] stage 00:01:36.787 [Pipeline] { (Run VM) 00:01:36.801 [Pipeline] sh 00:01:37.080 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:37.080 + echo 'Start stage prepare_nvme.sh' 00:01:37.080 Start stage prepare_nvme.sh 00:01:37.080 + [[ -n 7 ]] 00:01:37.080 + disk_prefix=ex7 00:01:37.080 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]] 00:01:37.080 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]] 00:01:37.080 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf 00:01:37.080 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:37.080 ++ SPDK_TEST_NVMF=1 00:01:37.080 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:37.080 ++ SPDK_TEST_URING=1 00:01:37.080 ++ SPDK_TEST_USDT=1 00:01:37.080 ++ SPDK_RUN_UBSAN=1 00:01:37.080 ++ NET_TYPE=virt 00:01:37.080 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:37.080 ++ RUN_NIGHTLY=0 00:01:37.080 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:37.080 + nvme_files=() 00:01:37.080 + declare -A nvme_files 00:01:37.080 + backend_dir=/var/lib/libvirt/images/backends 00:01:37.080 + nvme_files['nvme.img']=5G 00:01:37.080 + nvme_files['nvme-cmb.img']=5G 00:01:37.080 + nvme_files['nvme-multi0.img']=4G 00:01:37.080 + nvme_files['nvme-multi1.img']=4G 00:01:37.080 + nvme_files['nvme-multi2.img']=4G 00:01:37.080 + nvme_files['nvme-openstack.img']=8G 00:01:37.080 + nvme_files['nvme-zns.img']=5G 00:01:37.080 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:37.080 + (( SPDK_TEST_FTL == 1 )) 00:01:37.080 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:37.080 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:01:37.080 + for nvme in "${!nvme_files[@]}" 00:01:37.080 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi2.img -s 4G 00:01:37.080 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:37.080 + for nvme in "${!nvme_files[@]}" 00:01:37.080 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-cmb.img -s 5G 00:01:37.080 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:37.080 + for nvme in "${!nvme_files[@]}" 00:01:37.080 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-openstack.img -s 8G 00:01:37.080 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:37.080 + for nvme in "${!nvme_files[@]}" 00:01:37.080 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-zns.img -s 5G 00:01:37.080 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:37.080 + for nvme in "${!nvme_files[@]}" 00:01:37.080 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi1.img -s 4G 00:01:37.080 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:37.080 + for nvme in "${!nvme_files[@]}" 00:01:37.080 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi0.img -s 4G 00:01:37.080 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:37.080 + for nvme in "${!nvme_files[@]}" 00:01:37.080 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme.img -s 5G 00:01:37.080 Formatting '/var/lib/libvirt/images/backends/ex7-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:37.080 ++ sudo grep -rl ex7-nvme.img /etc/libvirt/qemu 00:01:37.080 + echo 'End stage prepare_nvme.sh' 00:01:37.080 End stage prepare_nvme.sh 00:01:37.092 [Pipeline] sh 00:01:37.372 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:37.372 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex7-nvme.img -b /var/lib/libvirt/images/backends/ex7-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img -H -a -v -f fedora38 00:01:37.372 00:01:37.372 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant 00:01:37.372 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk 00:01:37.372 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:37.372 HELP=0 00:01:37.372 DRY_RUN=0 00:01:37.372 NVME_FILE=/var/lib/libvirt/images/backends/ex7-nvme.img,/var/lib/libvirt/images/backends/ex7-nvme-multi0.img, 00:01:37.372 NVME_DISKS_TYPE=nvme,nvme, 00:01:37.372 NVME_AUTO_CREATE=0 00:01:37.372 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img, 00:01:37.372 NVME_CMB=,, 00:01:37.372 NVME_PMR=,, 00:01:37.372 NVME_ZNS=,, 00:01:37.372 NVME_MS=,, 00:01:37.372 NVME_FDP=,, 
00:01:37.372 SPDK_VAGRANT_DISTRO=fedora38 00:01:37.372 SPDK_VAGRANT_VMCPU=10 00:01:37.372 SPDK_VAGRANT_VMRAM=12288 00:01:37.372 SPDK_VAGRANT_PROVIDER=libvirt 00:01:37.372 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:37.372 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:37.372 SPDK_OPENSTACK_NETWORK=0 00:01:37.372 VAGRANT_PACKAGE_BOX=0 00:01:37.372 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:37.372 FORCE_DISTRO=true 00:01:37.372 VAGRANT_BOX_VERSION= 00:01:37.372 EXTRA_VAGRANTFILES= 00:01:37.372 NIC_MODEL=e1000 00:01:37.372 00:01:37.372 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt' 00:01:37.373 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:40.658 Bringing machine 'default' up with 'libvirt' provider... 00:01:40.658 ==> default: Creating image (snapshot of base box volume). 00:01:40.916 ==> default: Creating domain with the following settings... 00:01:40.916 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721062230_2c43f09958df8bbcb70e 00:01:40.916 ==> default: -- Domain type: kvm 00:01:40.916 ==> default: -- Cpus: 10 00:01:40.916 ==> default: -- Feature: acpi 00:01:40.916 ==> default: -- Feature: apic 00:01:40.916 ==> default: -- Feature: pae 00:01:40.916 ==> default: -- Memory: 12288M 00:01:40.916 ==> default: -- Memory Backing: hugepages: 00:01:40.916 ==> default: -- Management MAC: 00:01:40.916 ==> default: -- Loader: 00:01:40.916 ==> default: -- Nvram: 00:01:40.916 ==> default: -- Base box: spdk/fedora38 00:01:40.916 ==> default: -- Storage pool: default 00:01:40.916 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721062230_2c43f09958df8bbcb70e.img (20G) 00:01:40.916 ==> default: -- Volume Cache: default 00:01:40.916 ==> default: -- Kernel: 00:01:40.916 ==> default: -- Initrd: 00:01:40.916 ==> default: -- Graphics Type: vnc 00:01:40.916 ==> default: -- Graphics Port: -1 00:01:40.916 ==> default: -- Graphics IP: 127.0.0.1 00:01:40.916 ==> default: -- Graphics Password: Not defined 00:01:40.916 ==> default: -- Video Type: cirrus 00:01:40.916 ==> default: -- Video VRAM: 9216 00:01:40.916 ==> default: -- Sound Type: 00:01:40.916 ==> default: -- Keymap: en-us 00:01:40.916 ==> default: -- TPM Path: 00:01:40.916 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:40.916 ==> default: -- Command line args: 00:01:40.916 ==> default: -> value=-device, 00:01:40.916 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:40.916 ==> default: -> value=-drive, 00:01:40.916 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme.img,if=none,id=nvme-0-drive0, 00:01:40.916 ==> default: -> value=-device, 00:01:40.916 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:40.916 ==> default: -> value=-device, 00:01:40.916 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:40.916 ==> default: -> value=-drive, 00:01:40.916 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:01:40.916 ==> default: -> value=-device, 00:01:40.916 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:40.916 ==> default: -> value=-drive, 
00:01:40.916 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:01:40.916 ==> default: -> value=-device, 00:01:40.916 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:40.916 ==> default: -> value=-drive, 00:01:40.916 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:01:40.916 ==> default: -> value=-device, 00:01:40.916 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:41.174 ==> default: Creating shared folders metadata... 00:01:41.174 ==> default: Starting domain. 00:01:42.550 ==> default: Waiting for domain to get an IP address... 00:02:00.697 ==> default: Waiting for SSH to become available... 00:02:00.697 ==> default: Configuring and enabling network interfaces... 00:02:03.232 default: SSH address: 192.168.121.174:22 00:02:03.232 default: SSH username: vagrant 00:02:03.232 default: SSH auth method: private key 00:02:05.131 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:13.253 ==> default: Mounting SSHFS shared folder... 00:02:14.187 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:02:14.187 ==> default: Checking Mount.. 00:02:15.563 ==> default: Folder Successfully Mounted! 00:02:15.563 ==> default: Running provisioner: file... 00:02:16.130 default: ~/.gitconfig => .gitconfig 00:02:16.697 00:02:16.697 SUCCESS! 00:02:16.697 00:02:16.697 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use. 00:02:16.697 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:16.697 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt" to destroy all trace of vm. 00:02:16.697 00:02:16.707 [Pipeline] } 00:02:16.724 [Pipeline] // stage 00:02:16.733 [Pipeline] dir 00:02:16.734 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt 00:02:16.736 [Pipeline] { 00:02:16.750 [Pipeline] catchError 00:02:16.752 [Pipeline] { 00:02:16.766 [Pipeline] sh 00:02:17.044 + vagrant ssh-config --host vagrant 00:02:17.044 + sed -ne /^Host/,$p 00:02:17.044 + tee ssh_conf 00:02:20.382 Host vagrant 00:02:20.382 HostName 192.168.121.174 00:02:20.382 User vagrant 00:02:20.382 Port 22 00:02:20.382 UserKnownHostsFile /dev/null 00:02:20.382 StrictHostKeyChecking no 00:02:20.382 PasswordAuthentication no 00:02:20.382 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38 00:02:20.382 IdentitiesOnly yes 00:02:20.382 LogLevel FATAL 00:02:20.382 ForwardAgent yes 00:02:20.382 ForwardX11 yes 00:02:20.382 00:02:20.396 [Pipeline] withEnv 00:02:20.398 [Pipeline] { 00:02:20.415 [Pipeline] sh 00:02:20.693 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:20.693 source /etc/os-release 00:02:20.693 [[ -e /image.version ]] && img=$(< /image.version) 00:02:20.693 # Minimal, systemd-like check. 
00:02:20.693 if [[ -e /.dockerenv ]]; then 00:02:20.693 # Clear garbage from the node's name: 00:02:20.693 # agt-er_autotest_547-896 -> autotest_547-896 00:02:20.693 # $HOSTNAME is the actual container id 00:02:20.693 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:20.693 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:20.693 # We can assume this is a mount from a host where container is running, 00:02:20.693 # so fetch its hostname to easily identify the target swarm worker. 00:02:20.693 container="$(< /etc/hostname) ($agent)" 00:02:20.693 else 00:02:20.693 # Fallback 00:02:20.693 container=$agent 00:02:20.693 fi 00:02:20.693 fi 00:02:20.693 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:20.693 00:02:20.705 [Pipeline] } 00:02:20.726 [Pipeline] // withEnv 00:02:20.734 [Pipeline] setCustomBuildProperty 00:02:20.750 [Pipeline] stage 00:02:20.753 [Pipeline] { (Tests) 00:02:20.772 [Pipeline] sh 00:02:21.052 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:21.067 [Pipeline] sh 00:02:21.345 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:21.366 [Pipeline] timeout 00:02:21.366 Timeout set to expire in 30 min 00:02:21.368 [Pipeline] { 00:02:21.388 [Pipeline] sh 00:02:21.667 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:22.233 HEAD is now at a95bbf233 blob: set parent_id properly on spdk_bs_blob_set_external_parent. 00:02:22.247 [Pipeline] sh 00:02:22.525 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:22.797 [Pipeline] sh 00:02:23.076 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:23.094 [Pipeline] sh 00:02:23.370 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:02:23.370 ++ readlink -f spdk_repo 00:02:23.370 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:23.370 + [[ -n /home/vagrant/spdk_repo ]] 00:02:23.370 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:23.370 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:23.370 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:23.370 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:23.370 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:23.370 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:02:23.370 + cd /home/vagrant/spdk_repo 00:02:23.370 + source /etc/os-release 00:02:23.370 ++ NAME='Fedora Linux' 00:02:23.370 ++ VERSION='38 (Cloud Edition)' 00:02:23.370 ++ ID=fedora 00:02:23.370 ++ VERSION_ID=38 00:02:23.370 ++ VERSION_CODENAME= 00:02:23.370 ++ PLATFORM_ID=platform:f38 00:02:23.370 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:02:23.370 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:23.370 ++ LOGO=fedora-logo-icon 00:02:23.370 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:02:23.370 ++ HOME_URL=https://fedoraproject.org/ 00:02:23.370 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:02:23.370 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:23.370 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:23.370 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:23.370 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:02:23.370 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:23.370 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:02:23.370 ++ SUPPORT_END=2024-05-14 00:02:23.370 ++ VARIANT='Cloud Edition' 00:02:23.370 ++ VARIANT_ID=cloud 00:02:23.370 + uname -a 00:02:23.370 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:02:23.370 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:23.935 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:23.935 Hugepages 00:02:23.935 node hugesize free / total 00:02:23.935 node0 1048576kB 0 / 0 00:02:23.935 node0 2048kB 0 / 0 00:02:23.935 00:02:23.935 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:23.935 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:23.935 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:23.935 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:02:23.935 + rm -f /tmp/spdk-ld-path 00:02:23.936 + source autorun-spdk.conf 00:02:23.936 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:23.936 ++ SPDK_TEST_NVMF=1 00:02:23.936 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:23.936 ++ SPDK_TEST_URING=1 00:02:23.936 ++ SPDK_TEST_USDT=1 00:02:23.936 ++ SPDK_RUN_UBSAN=1 00:02:23.936 ++ NET_TYPE=virt 00:02:23.936 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:23.936 ++ RUN_NIGHTLY=0 00:02:23.936 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:23.936 + [[ -n '' ]] 00:02:23.936 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:23.936 + for M in /var/spdk/build-*-manifest.txt 00:02:23.936 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:23.936 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:23.936 + for M in /var/spdk/build-*-manifest.txt 00:02:23.936 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:23.936 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:23.936 ++ uname 00:02:23.936 + [[ Linux == \L\i\n\u\x ]] 00:02:23.936 + sudo dmesg -T 00:02:24.195 + sudo dmesg --clear 00:02:24.195 + dmesg_pid=5102 00:02:24.195 + sudo dmesg -Tw 00:02:24.195 + [[ Fedora Linux == FreeBSD ]] 00:02:24.195 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:24.195 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:24.195 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:24.195 + [[ -x /usr/src/fio-static/fio ]] 00:02:24.195 + export FIO_BIN=/usr/src/fio-static/fio 
00:02:24.195 + FIO_BIN=/usr/src/fio-static/fio 00:02:24.195 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:24.195 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:24.195 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:24.195 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:24.195 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:24.195 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:24.195 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:24.195 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:24.195 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:24.195 Test configuration: 00:02:24.195 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:24.195 SPDK_TEST_NVMF=1 00:02:24.195 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:24.195 SPDK_TEST_URING=1 00:02:24.195 SPDK_TEST_USDT=1 00:02:24.195 SPDK_RUN_UBSAN=1 00:02:24.195 NET_TYPE=virt 00:02:24.195 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:24.195 RUN_NIGHTLY=0 16:51:14 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:24.195 16:51:14 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:24.195 16:51:14 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:24.195 16:51:14 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:24.195 16:51:14 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:24.195 16:51:14 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:24.195 16:51:14 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:24.195 16:51:14 -- paths/export.sh@5 -- $ export PATH 00:02:24.195 16:51:14 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:24.195 16:51:14 -- common/autobuild_common.sh@443 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:24.195 16:51:14 -- common/autobuild_common.sh@444 -- $ date +%s 00:02:24.195 16:51:14 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721062274.XXXXXX 00:02:24.195 16:51:14 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721062274.xzIYoo 00:02:24.195 16:51:14 -- 
common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:02:24.195 16:51:14 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:02:24.195 16:51:14 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:02:24.195 16:51:14 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:24.196 16:51:14 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:24.196 16:51:14 -- common/autobuild_common.sh@460 -- $ get_config_params 00:02:24.196 16:51:14 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:02:24.196 16:51:14 -- common/autotest_common.sh@10 -- $ set +x 00:02:24.196 16:51:14 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring' 00:02:24.196 16:51:14 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:02:24.196 16:51:14 -- pm/common@17 -- $ local monitor 00:02:24.196 16:51:14 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:24.196 16:51:14 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:24.196 16:51:14 -- pm/common@21 -- $ date +%s 00:02:24.196 16:51:14 -- pm/common@25 -- $ sleep 1 00:02:24.196 16:51:14 -- pm/common@21 -- $ date +%s 00:02:24.196 16:51:14 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721062274 00:02:24.196 16:51:14 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721062274 00:02:24.196 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721062274_collect-cpu-load.pm.log 00:02:24.196 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721062274_collect-vmstat.pm.log 00:02:25.143 16:51:15 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:02:25.143 16:51:15 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:25.143 16:51:15 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:25.143 16:51:15 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:25.143 16:51:15 -- spdk/autobuild.sh@16 -- $ date -u 00:02:25.143 Mon Jul 15 04:51:15 PM UTC 2024 00:02:25.143 16:51:15 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:25.144 v24.09-pre-209-ga95bbf233 00:02:25.144 16:51:15 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:25.144 16:51:15 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:25.144 16:51:15 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:25.144 16:51:15 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:02:25.144 16:51:15 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:25.144 16:51:15 -- common/autotest_common.sh@10 -- $ set +x 00:02:25.144 ************************************ 00:02:25.144 START TEST ubsan 00:02:25.144 ************************************ 00:02:25.144 using ubsan 00:02:25.144 16:51:15 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:02:25.144 00:02:25.144 real 0m0.000s 00:02:25.144 user 0m0.000s 00:02:25.144 sys 0m0.000s 00:02:25.144 
16:51:15 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:25.144 16:51:15 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:25.144 ************************************ 00:02:25.144 END TEST ubsan 00:02:25.144 ************************************ 00:02:25.402 16:51:15 -- common/autotest_common.sh@1142 -- $ return 0 00:02:25.402 16:51:15 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:25.402 16:51:15 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:25.402 16:51:15 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:25.402 16:51:15 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:25.402 16:51:15 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:25.402 16:51:15 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:25.402 16:51:15 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:25.402 16:51:15 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:25.402 16:51:15 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared 00:02:25.402 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:25.402 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:25.969 Using 'verbs' RDMA provider 00:02:39.173 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:02:51.366 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:02:51.366 Creating mk/config.mk...done. 00:02:51.366 Creating mk/cc.flags.mk...done. 00:02:51.366 Type 'make' to build. 00:02:51.366 16:51:41 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:02:51.366 16:51:41 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:02:51.366 16:51:41 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:51.366 16:51:41 -- common/autotest_common.sh@10 -- $ set +x 00:02:51.366 ************************************ 00:02:51.366 START TEST make 00:02:51.366 ************************************ 00:02:51.366 16:51:41 make -- common/autotest_common.sh@1123 -- $ make -j10 00:02:51.625 make[1]: Nothing to be done for 'all'. 
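For reference, the configure and build step recorded above reduces to the following sequence when run by hand. This is a minimal sketch assuming a checkout at /home/vagrant/spdk_repo/spdk with the same toolchain and dependencies already installed (fio sources under /usr/src/fio, liburing, a UBSan-capable gcc); it is not the CI driver itself, which wraps these calls in spdk/autobuild.sh:

    # Configure SPDK with the same options autobuild.sh passed in this run
    cd /home/vagrant/spdk_repo/spdk
    ./configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd \
        --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
        --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared
    # Build with the same parallelism the log uses (run_test make make -j10)
    make -j10
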
00:03:01.647 The Meson build system 00:03:01.647 Version: 1.3.1 00:03:01.647 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:03:01.647 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:03:01.647 Build type: native build 00:03:01.647 Program cat found: YES (/usr/bin/cat) 00:03:01.647 Project name: DPDK 00:03:01.647 Project version: 24.03.0 00:03:01.647 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:03:01.647 C linker for the host machine: cc ld.bfd 2.39-16 00:03:01.647 Host machine cpu family: x86_64 00:03:01.647 Host machine cpu: x86_64 00:03:01.647 Message: ## Building in Developer Mode ## 00:03:01.647 Program pkg-config found: YES (/usr/bin/pkg-config) 00:03:01.647 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:03:01.647 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:03:01.647 Program python3 found: YES (/usr/bin/python3) 00:03:01.647 Program cat found: YES (/usr/bin/cat) 00:03:01.648 Compiler for C supports arguments -march=native: YES 00:03:01.648 Checking for size of "void *" : 8 00:03:01.648 Checking for size of "void *" : 8 (cached) 00:03:01.648 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:03:01.648 Library m found: YES 00:03:01.648 Library numa found: YES 00:03:01.648 Has header "numaif.h" : YES 00:03:01.648 Library fdt found: NO 00:03:01.648 Library execinfo found: NO 00:03:01.648 Has header "execinfo.h" : YES 00:03:01.648 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:03:01.648 Run-time dependency libarchive found: NO (tried pkgconfig) 00:03:01.648 Run-time dependency libbsd found: NO (tried pkgconfig) 00:03:01.648 Run-time dependency jansson found: NO (tried pkgconfig) 00:03:01.648 Run-time dependency openssl found: YES 3.0.9 00:03:01.648 Run-time dependency libpcap found: YES 1.10.4 00:03:01.648 Has header "pcap.h" with dependency libpcap: YES 00:03:01.648 Compiler for C supports arguments -Wcast-qual: YES 00:03:01.648 Compiler for C supports arguments -Wdeprecated: YES 00:03:01.648 Compiler for C supports arguments -Wformat: YES 00:03:01.648 Compiler for C supports arguments -Wformat-nonliteral: NO 00:03:01.648 Compiler for C supports arguments -Wformat-security: NO 00:03:01.648 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:01.648 Compiler for C supports arguments -Wmissing-prototypes: YES 00:03:01.648 Compiler for C supports arguments -Wnested-externs: YES 00:03:01.648 Compiler for C supports arguments -Wold-style-definition: YES 00:03:01.648 Compiler for C supports arguments -Wpointer-arith: YES 00:03:01.648 Compiler for C supports arguments -Wsign-compare: YES 00:03:01.648 Compiler for C supports arguments -Wstrict-prototypes: YES 00:03:01.648 Compiler for C supports arguments -Wundef: YES 00:03:01.648 Compiler for C supports arguments -Wwrite-strings: YES 00:03:01.648 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:03:01.648 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:03:01.648 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:01.648 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:03:01.648 Program objdump found: YES (/usr/bin/objdump) 00:03:01.648 Compiler for C supports arguments -mavx512f: YES 00:03:01.648 Checking if "AVX512 checking" compiles: YES 00:03:01.648 Fetching value of define "__SSE4_2__" : 1 00:03:01.648 Fetching value of define 
"__AES__" : 1 00:03:01.648 Fetching value of define "__AVX__" : 1 00:03:01.648 Fetching value of define "__AVX2__" : 1 00:03:01.648 Fetching value of define "__AVX512BW__" : (undefined) 00:03:01.648 Fetching value of define "__AVX512CD__" : (undefined) 00:03:01.648 Fetching value of define "__AVX512DQ__" : (undefined) 00:03:01.648 Fetching value of define "__AVX512F__" : (undefined) 00:03:01.648 Fetching value of define "__AVX512VL__" : (undefined) 00:03:01.648 Fetching value of define "__PCLMUL__" : 1 00:03:01.648 Fetching value of define "__RDRND__" : 1 00:03:01.648 Fetching value of define "__RDSEED__" : 1 00:03:01.648 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:03:01.648 Fetching value of define "__znver1__" : (undefined) 00:03:01.648 Fetching value of define "__znver2__" : (undefined) 00:03:01.648 Fetching value of define "__znver3__" : (undefined) 00:03:01.648 Fetching value of define "__znver4__" : (undefined) 00:03:01.648 Compiler for C supports arguments -Wno-format-truncation: YES 00:03:01.648 Message: lib/log: Defining dependency "log" 00:03:01.648 Message: lib/kvargs: Defining dependency "kvargs" 00:03:01.648 Message: lib/telemetry: Defining dependency "telemetry" 00:03:01.648 Checking for function "getentropy" : NO 00:03:01.648 Message: lib/eal: Defining dependency "eal" 00:03:01.648 Message: lib/ring: Defining dependency "ring" 00:03:01.648 Message: lib/rcu: Defining dependency "rcu" 00:03:01.648 Message: lib/mempool: Defining dependency "mempool" 00:03:01.648 Message: lib/mbuf: Defining dependency "mbuf" 00:03:01.648 Fetching value of define "__PCLMUL__" : 1 (cached) 00:03:01.648 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:03:01.648 Compiler for C supports arguments -mpclmul: YES 00:03:01.648 Compiler for C supports arguments -maes: YES 00:03:01.648 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:01.648 Compiler for C supports arguments -mavx512bw: YES 00:03:01.648 Compiler for C supports arguments -mavx512dq: YES 00:03:01.648 Compiler for C supports arguments -mavx512vl: YES 00:03:01.648 Compiler for C supports arguments -mvpclmulqdq: YES 00:03:01.648 Compiler for C supports arguments -mavx2: YES 00:03:01.648 Compiler for C supports arguments -mavx: YES 00:03:01.648 Message: lib/net: Defining dependency "net" 00:03:01.648 Message: lib/meter: Defining dependency "meter" 00:03:01.648 Message: lib/ethdev: Defining dependency "ethdev" 00:03:01.648 Message: lib/pci: Defining dependency "pci" 00:03:01.648 Message: lib/cmdline: Defining dependency "cmdline" 00:03:01.648 Message: lib/hash: Defining dependency "hash" 00:03:01.648 Message: lib/timer: Defining dependency "timer" 00:03:01.648 Message: lib/compressdev: Defining dependency "compressdev" 00:03:01.648 Message: lib/cryptodev: Defining dependency "cryptodev" 00:03:01.648 Message: lib/dmadev: Defining dependency "dmadev" 00:03:01.648 Compiler for C supports arguments -Wno-cast-qual: YES 00:03:01.648 Message: lib/power: Defining dependency "power" 00:03:01.648 Message: lib/reorder: Defining dependency "reorder" 00:03:01.648 Message: lib/security: Defining dependency "security" 00:03:01.648 Has header "linux/userfaultfd.h" : YES 00:03:01.648 Has header "linux/vduse.h" : YES 00:03:01.648 Message: lib/vhost: Defining dependency "vhost" 00:03:01.648 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:03:01.648 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:03:01.648 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:03:01.648 Message: 
drivers/mempool/ring: Defining dependency "mempool_ring" 00:03:01.648 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:03:01.648 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:03:01.648 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:03:01.648 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:03:01.648 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:03:01.648 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:03:01.648 Program doxygen found: YES (/usr/bin/doxygen) 00:03:01.648 Configuring doxy-api-html.conf using configuration 00:03:01.648 Configuring doxy-api-man.conf using configuration 00:03:01.648 Program mandb found: YES (/usr/bin/mandb) 00:03:01.648 Program sphinx-build found: NO 00:03:01.648 Configuring rte_build_config.h using configuration 00:03:01.648 Message: 00:03:01.648 ================= 00:03:01.648 Applications Enabled 00:03:01.648 ================= 00:03:01.648 00:03:01.648 apps: 00:03:01.648 00:03:01.648 00:03:01.648 Message: 00:03:01.648 ================= 00:03:01.648 Libraries Enabled 00:03:01.648 ================= 00:03:01.648 00:03:01.648 libs: 00:03:01.648 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:03:01.648 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:03:01.648 cryptodev, dmadev, power, reorder, security, vhost, 00:03:01.648 00:03:01.648 Message: 00:03:01.648 =============== 00:03:01.648 Drivers Enabled 00:03:01.648 =============== 00:03:01.648 00:03:01.648 common: 00:03:01.648 00:03:01.648 bus: 00:03:01.648 pci, vdev, 00:03:01.648 mempool: 00:03:01.648 ring, 00:03:01.648 dma: 00:03:01.648 00:03:01.648 net: 00:03:01.648 00:03:01.648 crypto: 00:03:01.648 00:03:01.648 compress: 00:03:01.648 00:03:01.648 vdpa: 00:03:01.648 00:03:01.648 00:03:01.648 Message: 00:03:01.648 ================= 00:03:01.648 Content Skipped 00:03:01.648 ================= 00:03:01.648 00:03:01.648 apps: 00:03:01.648 dumpcap: explicitly disabled via build config 00:03:01.648 graph: explicitly disabled via build config 00:03:01.648 pdump: explicitly disabled via build config 00:03:01.648 proc-info: explicitly disabled via build config 00:03:01.648 test-acl: explicitly disabled via build config 00:03:01.648 test-bbdev: explicitly disabled via build config 00:03:01.648 test-cmdline: explicitly disabled via build config 00:03:01.648 test-compress-perf: explicitly disabled via build config 00:03:01.648 test-crypto-perf: explicitly disabled via build config 00:03:01.648 test-dma-perf: explicitly disabled via build config 00:03:01.648 test-eventdev: explicitly disabled via build config 00:03:01.648 test-fib: explicitly disabled via build config 00:03:01.648 test-flow-perf: explicitly disabled via build config 00:03:01.648 test-gpudev: explicitly disabled via build config 00:03:01.648 test-mldev: explicitly disabled via build config 00:03:01.648 test-pipeline: explicitly disabled via build config 00:03:01.648 test-pmd: explicitly disabled via build config 00:03:01.648 test-regex: explicitly disabled via build config 00:03:01.648 test-sad: explicitly disabled via build config 00:03:01.648 test-security-perf: explicitly disabled via build config 00:03:01.648 00:03:01.648 libs: 00:03:01.648 argparse: explicitly disabled via build config 00:03:01.648 metrics: explicitly disabled via build config 00:03:01.648 acl: explicitly disabled via build config 00:03:01.648 bbdev: explicitly disabled via build config 00:03:01.648 
bitratestats: explicitly disabled via build config 00:03:01.648 bpf: explicitly disabled via build config 00:03:01.648 cfgfile: explicitly disabled via build config 00:03:01.648 distributor: explicitly disabled via build config 00:03:01.648 efd: explicitly disabled via build config 00:03:01.648 eventdev: explicitly disabled via build config 00:03:01.648 dispatcher: explicitly disabled via build config 00:03:01.648 gpudev: explicitly disabled via build config 00:03:01.648 gro: explicitly disabled via build config 00:03:01.648 gso: explicitly disabled via build config 00:03:01.648 ip_frag: explicitly disabled via build config 00:03:01.648 jobstats: explicitly disabled via build config 00:03:01.648 latencystats: explicitly disabled via build config 00:03:01.648 lpm: explicitly disabled via build config 00:03:01.648 member: explicitly disabled via build config 00:03:01.648 pcapng: explicitly disabled via build config 00:03:01.648 rawdev: explicitly disabled via build config 00:03:01.648 regexdev: explicitly disabled via build config 00:03:01.648 mldev: explicitly disabled via build config 00:03:01.648 rib: explicitly disabled via build config 00:03:01.648 sched: explicitly disabled via build config 00:03:01.648 stack: explicitly disabled via build config 00:03:01.648 ipsec: explicitly disabled via build config 00:03:01.648 pdcp: explicitly disabled via build config 00:03:01.648 fib: explicitly disabled via build config 00:03:01.648 port: explicitly disabled via build config 00:03:01.648 pdump: explicitly disabled via build config 00:03:01.648 table: explicitly disabled via build config 00:03:01.649 pipeline: explicitly disabled via build config 00:03:01.649 graph: explicitly disabled via build config 00:03:01.649 node: explicitly disabled via build config 00:03:01.649 00:03:01.649 drivers: 00:03:01.649 common/cpt: not in enabled drivers build config 00:03:01.649 common/dpaax: not in enabled drivers build config 00:03:01.649 common/iavf: not in enabled drivers build config 00:03:01.649 common/idpf: not in enabled drivers build config 00:03:01.649 common/ionic: not in enabled drivers build config 00:03:01.649 common/mvep: not in enabled drivers build config 00:03:01.649 common/octeontx: not in enabled drivers build config 00:03:01.649 bus/auxiliary: not in enabled drivers build config 00:03:01.649 bus/cdx: not in enabled drivers build config 00:03:01.649 bus/dpaa: not in enabled drivers build config 00:03:01.649 bus/fslmc: not in enabled drivers build config 00:03:01.649 bus/ifpga: not in enabled drivers build config 00:03:01.649 bus/platform: not in enabled drivers build config 00:03:01.649 bus/uacce: not in enabled drivers build config 00:03:01.649 bus/vmbus: not in enabled drivers build config 00:03:01.649 common/cnxk: not in enabled drivers build config 00:03:01.649 common/mlx5: not in enabled drivers build config 00:03:01.649 common/nfp: not in enabled drivers build config 00:03:01.649 common/nitrox: not in enabled drivers build config 00:03:01.649 common/qat: not in enabled drivers build config 00:03:01.649 common/sfc_efx: not in enabled drivers build config 00:03:01.649 mempool/bucket: not in enabled drivers build config 00:03:01.649 mempool/cnxk: not in enabled drivers build config 00:03:01.649 mempool/dpaa: not in enabled drivers build config 00:03:01.649 mempool/dpaa2: not in enabled drivers build config 00:03:01.649 mempool/octeontx: not in enabled drivers build config 00:03:01.649 mempool/stack: not in enabled drivers build config 00:03:01.649 dma/cnxk: not in enabled drivers build 
config 00:03:01.649 dma/dpaa: not in enabled drivers build config 00:03:01.649 dma/dpaa2: not in enabled drivers build config 00:03:01.649 dma/hisilicon: not in enabled drivers build config 00:03:01.649 dma/idxd: not in enabled drivers build config 00:03:01.649 dma/ioat: not in enabled drivers build config 00:03:01.649 dma/skeleton: not in enabled drivers build config 00:03:01.649 net/af_packet: not in enabled drivers build config 00:03:01.649 net/af_xdp: not in enabled drivers build config 00:03:01.649 net/ark: not in enabled drivers build config 00:03:01.649 net/atlantic: not in enabled drivers build config 00:03:01.649 net/avp: not in enabled drivers build config 00:03:01.649 net/axgbe: not in enabled drivers build config 00:03:01.649 net/bnx2x: not in enabled drivers build config 00:03:01.649 net/bnxt: not in enabled drivers build config 00:03:01.649 net/bonding: not in enabled drivers build config 00:03:01.649 net/cnxk: not in enabled drivers build config 00:03:01.649 net/cpfl: not in enabled drivers build config 00:03:01.649 net/cxgbe: not in enabled drivers build config 00:03:01.649 net/dpaa: not in enabled drivers build config 00:03:01.649 net/dpaa2: not in enabled drivers build config 00:03:01.649 net/e1000: not in enabled drivers build config 00:03:01.649 net/ena: not in enabled drivers build config 00:03:01.649 net/enetc: not in enabled drivers build config 00:03:01.649 net/enetfec: not in enabled drivers build config 00:03:01.649 net/enic: not in enabled drivers build config 00:03:01.649 net/failsafe: not in enabled drivers build config 00:03:01.649 net/fm10k: not in enabled drivers build config 00:03:01.649 net/gve: not in enabled drivers build config 00:03:01.649 net/hinic: not in enabled drivers build config 00:03:01.649 net/hns3: not in enabled drivers build config 00:03:01.649 net/i40e: not in enabled drivers build config 00:03:01.649 net/iavf: not in enabled drivers build config 00:03:01.649 net/ice: not in enabled drivers build config 00:03:01.649 net/idpf: not in enabled drivers build config 00:03:01.649 net/igc: not in enabled drivers build config 00:03:01.649 net/ionic: not in enabled drivers build config 00:03:01.649 net/ipn3ke: not in enabled drivers build config 00:03:01.649 net/ixgbe: not in enabled drivers build config 00:03:01.649 net/mana: not in enabled drivers build config 00:03:01.649 net/memif: not in enabled drivers build config 00:03:01.649 net/mlx4: not in enabled drivers build config 00:03:01.649 net/mlx5: not in enabled drivers build config 00:03:01.649 net/mvneta: not in enabled drivers build config 00:03:01.649 net/mvpp2: not in enabled drivers build config 00:03:01.649 net/netvsc: not in enabled drivers build config 00:03:01.649 net/nfb: not in enabled drivers build config 00:03:01.649 net/nfp: not in enabled drivers build config 00:03:01.649 net/ngbe: not in enabled drivers build config 00:03:01.649 net/null: not in enabled drivers build config 00:03:01.649 net/octeontx: not in enabled drivers build config 00:03:01.649 net/octeon_ep: not in enabled drivers build config 00:03:01.649 net/pcap: not in enabled drivers build config 00:03:01.649 net/pfe: not in enabled drivers build config 00:03:01.649 net/qede: not in enabled drivers build config 00:03:01.649 net/ring: not in enabled drivers build config 00:03:01.649 net/sfc: not in enabled drivers build config 00:03:01.649 net/softnic: not in enabled drivers build config 00:03:01.649 net/tap: not in enabled drivers build config 00:03:01.649 net/thunderx: not in enabled drivers build config 00:03:01.649 
net/txgbe: not in enabled drivers build config 00:03:01.649 net/vdev_netvsc: not in enabled drivers build config 00:03:01.649 net/vhost: not in enabled drivers build config 00:03:01.649 net/virtio: not in enabled drivers build config 00:03:01.649 net/vmxnet3: not in enabled drivers build config 00:03:01.649 raw/*: missing internal dependency, "rawdev" 00:03:01.649 crypto/armv8: not in enabled drivers build config 00:03:01.649 crypto/bcmfs: not in enabled drivers build config 00:03:01.649 crypto/caam_jr: not in enabled drivers build config 00:03:01.649 crypto/ccp: not in enabled drivers build config 00:03:01.649 crypto/cnxk: not in enabled drivers build config 00:03:01.649 crypto/dpaa_sec: not in enabled drivers build config 00:03:01.649 crypto/dpaa2_sec: not in enabled drivers build config 00:03:01.649 crypto/ipsec_mb: not in enabled drivers build config 00:03:01.649 crypto/mlx5: not in enabled drivers build config 00:03:01.649 crypto/mvsam: not in enabled drivers build config 00:03:01.649 crypto/nitrox: not in enabled drivers build config 00:03:01.649 crypto/null: not in enabled drivers build config 00:03:01.649 crypto/octeontx: not in enabled drivers build config 00:03:01.649 crypto/openssl: not in enabled drivers build config 00:03:01.649 crypto/scheduler: not in enabled drivers build config 00:03:01.649 crypto/uadk: not in enabled drivers build config 00:03:01.649 crypto/virtio: not in enabled drivers build config 00:03:01.649 compress/isal: not in enabled drivers build config 00:03:01.649 compress/mlx5: not in enabled drivers build config 00:03:01.649 compress/nitrox: not in enabled drivers build config 00:03:01.649 compress/octeontx: not in enabled drivers build config 00:03:01.649 compress/zlib: not in enabled drivers build config 00:03:01.649 regex/*: missing internal dependency, "regexdev" 00:03:01.649 ml/*: missing internal dependency, "mldev" 00:03:01.649 vdpa/ifc: not in enabled drivers build config 00:03:01.649 vdpa/mlx5: not in enabled drivers build config 00:03:01.649 vdpa/nfp: not in enabled drivers build config 00:03:01.649 vdpa/sfc: not in enabled drivers build config 00:03:01.649 event/*: missing internal dependency, "eventdev" 00:03:01.649 baseband/*: missing internal dependency, "bbdev" 00:03:01.649 gpu/*: missing internal dependency, "gpudev" 00:03:01.649 00:03:01.649 00:03:01.907 Build targets in project: 85 00:03:01.907 00:03:01.907 DPDK 24.03.0 00:03:01.907 00:03:01.907 User defined options 00:03:01.907 buildtype : debug 00:03:01.907 default_library : shared 00:03:01.907 libdir : lib 00:03:01.907 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:03:01.907 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:03:01.907 c_link_args : 00:03:01.907 cpu_instruction_set: native 00:03:01.907 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:03:01.907 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:03:01.907 enable_docs : false 00:03:01.907 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:03:01.907 enable_kmods : false 00:03:01.907 max_lcores : 128 00:03:01.908 tests : false 00:03:01.908 00:03:01.908 Found 
ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:02.165 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:03:02.423 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:03:02.423 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:02.423 [3/268] Linking static target lib/librte_kvargs.a 00:03:02.423 [4/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:02.423 [5/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:03:02.423 [6/268] Linking static target lib/librte_log.a 00:03:02.989 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.989 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:03:02.989 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:03.247 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:03.247 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:03.247 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:03.247 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:03.247 [14/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:03.247 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:03.505 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:03.505 [17/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.505 [18/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:03.505 [19/268] Linking static target lib/librte_telemetry.a 00:03:03.505 [20/268] Linking target lib/librte_log.so.24.1 00:03:03.763 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:03:03.763 [22/268] Linking target lib/librte_kvargs.so.24.1 00:03:03.763 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:04.021 [24/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:03:04.021 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:04.021 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:04.021 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:04.280 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:04.280 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:04.280 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:04.280 [31/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.280 [32/268] Linking target lib/librte_telemetry.so.24.1 00:03:04.280 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:04.280 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:04.538 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:04.538 [36/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:03:04.538 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:05.106 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:05.106 [39/268] 
Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:05.106 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:05.106 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:05.106 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:05.106 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:05.106 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:05.106 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:05.106 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:05.365 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:05.365 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:05.365 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:05.624 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:05.883 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:05.883 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:05.883 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:06.141 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:06.141 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:06.141 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:06.141 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:06.141 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:06.400 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:06.400 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:06.400 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:06.659 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:06.659 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:06.917 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:06.917 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:06.917 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:06.917 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:07.176 [68/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:07.176 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:07.434 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:07.434 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:07.434 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:07.434 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:07.693 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:07.693 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:07.693 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:07.952 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:07.952 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:07.952 [79/268] Compiling C 
object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:07.952 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:08.210 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:08.210 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:08.210 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:08.468 [84/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:08.468 [85/268] Linking static target lib/librte_eal.a 00:03:08.727 [86/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:08.727 [87/268] Linking static target lib/librte_ring.a 00:03:08.727 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:08.727 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:08.987 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:08.987 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:08.987 [92/268] Linking static target lib/librte_mempool.a 00:03:08.987 [93/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:08.987 [94/268] Linking static target lib/librte_rcu.a 00:03:09.247 [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:09.247 [96/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.247 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:09.505 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:09.505 [99/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:03:09.505 [100/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:03:09.763 [101/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.763 [102/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:09.763 [103/268] Linking static target lib/librte_mbuf.a 00:03:10.022 [104/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:10.022 [105/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:10.280 [106/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:10.280 [107/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:10.280 [108/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:10.280 [109/268] Linking static target lib/librte_net.a 00:03:10.280 [110/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.538 [111/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:10.538 [112/268] Linking static target lib/librte_meter.a 00:03:10.538 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:10.796 [114/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.796 [115/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.796 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:10.796 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:10.797 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:11.055 [119/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.314 [120/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:11.572 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:11.572 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:11.572 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:11.842 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:11.842 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:11.842 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:11.842 [127/268] Linking static target lib/librte_pci.a 00:03:12.100 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:12.100 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:12.100 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:12.100 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:12.100 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:12.358 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:12.358 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:12.358 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:12.358 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:12.358 [137/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.358 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:12.358 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:03:12.358 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:12.358 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:12.358 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:12.358 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:12.358 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:12.358 [145/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:12.358 [146/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:03:12.617 [147/268] Linking static target lib/librte_ethdev.a 00:03:12.876 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:12.876 [149/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:12.876 [150/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:12.876 [151/268] Linking static target lib/librte_cmdline.a 00:03:13.134 [152/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:13.134 [153/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:13.134 [154/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:13.392 [155/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:13.392 [156/268] Linking static target lib/librte_timer.a 00:03:13.392 [157/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:13.392 [158/268] Linking static target lib/librte_hash.a 00:03:13.650 [159/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:13.650 [160/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:13.650 [161/268] 
Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:13.650 [162/268] Linking static target lib/librte_compressdev.a 00:03:13.910 [163/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:13.910 [164/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.910 [165/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:14.480 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:14.480 [167/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:14.480 [168/268] Linking static target lib/librte_dmadev.a 00:03:14.480 [169/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:14.480 [170/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:14.480 [171/268] Linking static target lib/librte_cryptodev.a 00:03:14.480 [172/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.480 [173/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:14.480 [174/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:14.739 [175/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:14.739 [176/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.739 [177/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.998 [178/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:15.258 [179/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:15.258 [180/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.258 [181/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:15.258 [182/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:15.258 [183/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:15.258 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:15.258 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:15.258 [186/268] Linking static target lib/librte_power.a 00:03:15.825 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:15.825 [188/268] Linking static target lib/librte_reorder.a 00:03:15.825 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:15.825 [190/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:15.825 [191/268] Linking static target lib/librte_security.a 00:03:15.825 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:16.083 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:16.083 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.083 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:16.341 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.341 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.598 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:16.598 [199/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 
00:03:16.855 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:16.855 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:16.855 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:16.855 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:17.114 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:17.114 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:17.114 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:17.372 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:17.372 [208/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:17.372 [209/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:17.372 [210/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:17.372 [211/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:17.372 [212/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:17.630 [213/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:17.630 [214/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:17.630 [215/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:17.630 [216/268] Linking static target drivers/librte_bus_vdev.a 00:03:17.630 [217/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:17.630 [218/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:17.630 [219/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:17.630 [220/268] Linking static target drivers/librte_bus_pci.a 00:03:17.630 [221/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:17.630 [222/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:17.887 [223/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.887 [224/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:17.887 [225/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:17.887 [226/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:17.887 [227/268] Linking static target drivers/librte_mempool_ring.a 00:03:18.143 [228/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.707 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:18.707 [230/268] Linking static target lib/librte_vhost.a 00:03:19.273 [231/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.273 [232/268] Linking target lib/librte_eal.so.24.1 00:03:19.531 [233/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:19.531 [234/268] Linking target lib/librte_ring.so.24.1 00:03:19.531 [235/268] Linking target lib/librte_pci.so.24.1 00:03:19.531 [236/268] Linking target lib/librte_dmadev.so.24.1 00:03:19.531 [237/268] Linking target lib/librte_meter.so.24.1 00:03:19.531 [238/268] Linking target drivers/librte_bus_vdev.so.24.1 00:03:19.531 [239/268] Linking target 
lib/librte_timer.so.24.1 00:03:19.789 [240/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:19.789 [241/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:19.789 [242/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:19.789 [243/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:19.789 [244/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:19.789 [245/268] Linking target lib/librte_rcu.so.24.1 00:03:19.789 [246/268] Linking target lib/librte_mempool.so.24.1 00:03:19.789 [247/268] Linking target drivers/librte_bus_pci.so.24.1 00:03:19.789 [248/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:19.789 [249/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:20.047 [250/268] Linking target drivers/librte_mempool_ring.so.24.1 00:03:20.047 [251/268] Linking target lib/librte_mbuf.so.24.1 00:03:20.047 [252/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:20.047 [253/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.047 [254/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.047 [255/268] Linking target lib/librte_net.so.24.1 00:03:20.047 [256/268] Linking target lib/librte_reorder.so.24.1 00:03:20.047 [257/268] Linking target lib/librte_cryptodev.so.24.1 00:03:20.047 [258/268] Linking target lib/librte_compressdev.so.24.1 00:03:20.305 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:20.305 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:20.305 [261/268] Linking target lib/librte_hash.so.24.1 00:03:20.305 [262/268] Linking target lib/librte_cmdline.so.24.1 00:03:20.305 [263/268] Linking target lib/librte_security.so.24.1 00:03:20.305 [264/268] Linking target lib/librte_ethdev.so.24.1 00:03:20.564 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:20.564 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:20.564 [267/268] Linking target lib/librte_power.so.24.1 00:03:20.564 [268/268] Linking target lib/librte_vhost.so.24.1 00:03:20.564 INFO: autodetecting backend as ninja 00:03:20.564 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:21.937 CC lib/ut_mock/mock.o 00:03:21.937 CC lib/log/log.o 00:03:21.937 CC lib/log/log_flags.o 00:03:21.937 CC lib/log/log_deprecated.o 00:03:21.937 CC lib/ut/ut.o 00:03:21.937 LIB libspdk_ut_mock.a 00:03:21.937 LIB libspdk_log.a 00:03:21.937 LIB libspdk_ut.a 00:03:21.937 SO libspdk_ut_mock.so.6.0 00:03:21.937 SO libspdk_log.so.7.0 00:03:21.937 SO libspdk_ut.so.2.0 00:03:21.937 SYMLINK libspdk_ut_mock.so 00:03:22.194 SYMLINK libspdk_log.so 00:03:22.194 SYMLINK libspdk_ut.so 00:03:22.194 CC lib/dma/dma.o 00:03:22.194 CC lib/ioat/ioat.o 00:03:22.194 CXX lib/trace_parser/trace.o 00:03:22.194 CC lib/util/bit_array.o 00:03:22.194 CC lib/util/base64.o 00:03:22.194 CC lib/util/cpuset.o 00:03:22.194 CC lib/util/crc16.o 00:03:22.194 CC lib/util/crc32.o 00:03:22.194 CC lib/util/crc32c.o 00:03:22.451 CC lib/vfio_user/host/vfio_user_pci.o 00:03:22.451 CC lib/util/crc32_ieee.o 00:03:22.451 CC lib/util/crc64.o 00:03:22.451 
CC lib/vfio_user/host/vfio_user.o 00:03:22.451 LIB libspdk_dma.a 00:03:22.451 CC lib/util/dif.o 00:03:22.451 CC lib/util/fd.o 00:03:22.451 CC lib/util/file.o 00:03:22.451 SO libspdk_dma.so.4.0 00:03:22.709 LIB libspdk_ioat.a 00:03:22.709 SYMLINK libspdk_dma.so 00:03:22.709 CC lib/util/hexlify.o 00:03:22.709 CC lib/util/iov.o 00:03:22.709 CC lib/util/math.o 00:03:22.709 CC lib/util/pipe.o 00:03:22.709 SO libspdk_ioat.so.7.0 00:03:22.709 LIB libspdk_vfio_user.a 00:03:22.709 CC lib/util/strerror_tls.o 00:03:22.709 CC lib/util/string.o 00:03:22.709 SYMLINK libspdk_ioat.so 00:03:22.709 CC lib/util/uuid.o 00:03:22.709 SO libspdk_vfio_user.so.5.0 00:03:22.709 SYMLINK libspdk_vfio_user.so 00:03:22.709 CC lib/util/fd_group.o 00:03:22.709 CC lib/util/xor.o 00:03:22.709 CC lib/util/zipf.o 00:03:22.967 LIB libspdk_util.a 00:03:23.225 SO libspdk_util.so.9.1 00:03:23.225 LIB libspdk_trace_parser.a 00:03:23.225 SO libspdk_trace_parser.so.5.0 00:03:23.483 SYMLINK libspdk_util.so 00:03:23.483 SYMLINK libspdk_trace_parser.so 00:03:23.483 CC lib/env_dpdk/env.o 00:03:23.483 CC lib/conf/conf.o 00:03:23.483 CC lib/env_dpdk/memory.o 00:03:23.483 CC lib/env_dpdk/pci.o 00:03:23.483 CC lib/json/json_parse.o 00:03:23.483 CC lib/rdma_utils/rdma_utils.o 00:03:23.483 CC lib/env_dpdk/init.o 00:03:23.483 CC lib/vmd/vmd.o 00:03:23.483 CC lib/idxd/idxd.o 00:03:23.483 CC lib/rdma_provider/common.o 00:03:23.741 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:23.741 CC lib/json/json_util.o 00:03:23.741 LIB libspdk_conf.a 00:03:23.741 SO libspdk_conf.so.6.0 00:03:23.741 LIB libspdk_rdma_utils.a 00:03:23.741 SO libspdk_rdma_utils.so.1.0 00:03:23.999 SYMLINK libspdk_conf.so 00:03:23.999 CC lib/idxd/idxd_user.o 00:03:23.999 SYMLINK libspdk_rdma_utils.so 00:03:23.999 CC lib/vmd/led.o 00:03:23.999 CC lib/json/json_write.o 00:03:23.999 CC lib/env_dpdk/threads.o 00:03:23.999 LIB libspdk_rdma_provider.a 00:03:23.999 SO libspdk_rdma_provider.so.6.0 00:03:23.999 CC lib/idxd/idxd_kernel.o 00:03:23.999 SYMLINK libspdk_rdma_provider.so 00:03:23.999 CC lib/env_dpdk/pci_ioat.o 00:03:23.999 CC lib/env_dpdk/pci_virtio.o 00:03:23.999 CC lib/env_dpdk/pci_vmd.o 00:03:23.999 CC lib/env_dpdk/pci_idxd.o 00:03:24.257 CC lib/env_dpdk/pci_event.o 00:03:24.257 CC lib/env_dpdk/sigbus_handler.o 00:03:24.257 LIB libspdk_idxd.a 00:03:24.257 CC lib/env_dpdk/pci_dpdk.o 00:03:24.257 LIB libspdk_json.a 00:03:24.257 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:24.257 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:24.257 LIB libspdk_vmd.a 00:03:24.257 SO libspdk_idxd.so.12.0 00:03:24.257 SO libspdk_json.so.6.0 00:03:24.257 SO libspdk_vmd.so.6.0 00:03:24.257 SYMLINK libspdk_idxd.so 00:03:24.257 SYMLINK libspdk_json.so 00:03:24.257 SYMLINK libspdk_vmd.so 00:03:24.515 CC lib/jsonrpc/jsonrpc_server.o 00:03:24.515 CC lib/jsonrpc/jsonrpc_client.o 00:03:24.515 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:24.515 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:24.773 LIB libspdk_jsonrpc.a 00:03:24.773 SO libspdk_jsonrpc.so.6.0 00:03:24.773 LIB libspdk_env_dpdk.a 00:03:25.031 SYMLINK libspdk_jsonrpc.so 00:03:25.031 SO libspdk_env_dpdk.so.14.1 00:03:25.289 SYMLINK libspdk_env_dpdk.so 00:03:25.289 CC lib/rpc/rpc.o 00:03:25.547 LIB libspdk_rpc.a 00:03:25.547 SO libspdk_rpc.so.6.0 00:03:25.547 SYMLINK libspdk_rpc.so 00:03:25.804 CC lib/keyring/keyring.o 00:03:25.804 CC lib/keyring/keyring_rpc.o 00:03:25.804 CC lib/notify/notify.o 00:03:25.804 CC lib/notify/notify_rpc.o 00:03:25.804 CC lib/trace/trace.o 00:03:25.804 CC lib/trace/trace_flags.o 00:03:25.804 CC lib/trace/trace_rpc.o 00:03:26.062 LIB 
libspdk_notify.a 00:03:26.062 SO libspdk_notify.so.6.0 00:03:26.062 LIB libspdk_keyring.a 00:03:26.062 SYMLINK libspdk_notify.so 00:03:26.062 SO libspdk_keyring.so.1.0 00:03:26.062 LIB libspdk_trace.a 00:03:26.062 SYMLINK libspdk_keyring.so 00:03:26.062 SO libspdk_trace.so.10.0 00:03:26.319 SYMLINK libspdk_trace.so 00:03:26.577 CC lib/thread/thread.o 00:03:26.577 CC lib/thread/iobuf.o 00:03:26.577 CC lib/sock/sock.o 00:03:26.577 CC lib/sock/sock_rpc.o 00:03:26.847 LIB libspdk_sock.a 00:03:27.118 SO libspdk_sock.so.10.0 00:03:27.118 SYMLINK libspdk_sock.so 00:03:27.376 CC lib/nvme/nvme_ctrlr.o 00:03:27.376 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:27.376 CC lib/nvme/nvme_fabric.o 00:03:27.376 CC lib/nvme/nvme_ns_cmd.o 00:03:27.376 CC lib/nvme/nvme_ns.o 00:03:27.376 CC lib/nvme/nvme_pcie_common.o 00:03:27.376 CC lib/nvme/nvme_pcie.o 00:03:27.376 CC lib/nvme/nvme.o 00:03:27.376 CC lib/nvme/nvme_qpair.o 00:03:27.942 LIB libspdk_thread.a 00:03:27.942 SO libspdk_thread.so.10.1 00:03:28.200 SYMLINK libspdk_thread.so 00:03:28.200 CC lib/nvme/nvme_quirks.o 00:03:28.200 CC lib/nvme/nvme_transport.o 00:03:28.200 CC lib/nvme/nvme_discovery.o 00:03:28.200 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:28.200 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:28.200 CC lib/nvme/nvme_tcp.o 00:03:28.458 CC lib/nvme/nvme_opal.o 00:03:28.458 CC lib/nvme/nvme_io_msg.o 00:03:28.458 CC lib/nvme/nvme_poll_group.o 00:03:28.716 CC lib/nvme/nvme_zns.o 00:03:28.716 CC lib/nvme/nvme_stubs.o 00:03:28.716 CC lib/nvme/nvme_auth.o 00:03:28.975 CC lib/nvme/nvme_cuse.o 00:03:28.975 CC lib/nvme/nvme_rdma.o 00:03:28.975 CC lib/accel/accel.o 00:03:28.975 CC lib/blob/blobstore.o 00:03:29.233 CC lib/blob/request.o 00:03:29.492 CC lib/blob/zeroes.o 00:03:29.492 CC lib/init/json_config.o 00:03:29.492 CC lib/virtio/virtio.o 00:03:29.492 CC lib/virtio/virtio_vhost_user.o 00:03:29.751 CC lib/virtio/virtio_vfio_user.o 00:03:29.751 CC lib/init/subsystem.o 00:03:29.751 CC lib/blob/blob_bs_dev.o 00:03:29.751 CC lib/accel/accel_rpc.o 00:03:29.751 CC lib/init/subsystem_rpc.o 00:03:29.751 CC lib/virtio/virtio_pci.o 00:03:29.751 CC lib/init/rpc.o 00:03:30.017 CC lib/accel/accel_sw.o 00:03:30.017 LIB libspdk_init.a 00:03:30.017 SO libspdk_init.so.5.0 00:03:30.282 SYMLINK libspdk_init.so 00:03:30.282 LIB libspdk_virtio.a 00:03:30.282 LIB libspdk_accel.a 00:03:30.282 SO libspdk_virtio.so.7.0 00:03:30.282 SO libspdk_accel.so.15.1 00:03:30.282 SYMLINK libspdk_virtio.so 00:03:30.282 LIB libspdk_nvme.a 00:03:30.282 SYMLINK libspdk_accel.so 00:03:30.282 CC lib/event/app.o 00:03:30.282 CC lib/event/reactor.o 00:03:30.541 CC lib/event/log_rpc.o 00:03:30.541 CC lib/event/scheduler_static.o 00:03:30.541 CC lib/event/app_rpc.o 00:03:30.541 SO libspdk_nvme.so.13.1 00:03:30.541 CC lib/bdev/bdev_rpc.o 00:03:30.541 CC lib/bdev/bdev.o 00:03:30.541 CC lib/bdev/part.o 00:03:30.541 CC lib/bdev/bdev_zone.o 00:03:30.541 CC lib/bdev/scsi_nvme.o 00:03:30.799 SYMLINK libspdk_nvme.so 00:03:30.799 LIB libspdk_event.a 00:03:30.799 SO libspdk_event.so.14.0 00:03:31.057 SYMLINK libspdk_event.so 00:03:31.993 LIB libspdk_blob.a 00:03:31.993 SO libspdk_blob.so.11.0 00:03:31.993 SYMLINK libspdk_blob.so 00:03:32.251 CC lib/blobfs/blobfs.o 00:03:32.251 CC lib/blobfs/tree.o 00:03:32.251 CC lib/lvol/lvol.o 00:03:33.192 LIB libspdk_bdev.a 00:03:33.192 SO libspdk_bdev.so.15.1 00:03:33.192 LIB libspdk_blobfs.a 00:03:33.192 SO libspdk_blobfs.so.10.0 00:03:33.192 SYMLINK libspdk_bdev.so 00:03:33.192 LIB libspdk_lvol.a 00:03:33.192 SYMLINK libspdk_blobfs.so 00:03:33.450 SO libspdk_lvol.so.10.0 00:03:33.450 
SYMLINK libspdk_lvol.so 00:03:33.450 CC lib/nbd/nbd.o 00:03:33.450 CC lib/nbd/nbd_rpc.o 00:03:33.450 CC lib/scsi/dev.o 00:03:33.450 CC lib/ftl/ftl_core.o 00:03:33.450 CC lib/scsi/lun.o 00:03:33.450 CC lib/ftl/ftl_init.o 00:03:33.450 CC lib/scsi/port.o 00:03:33.450 CC lib/scsi/scsi.o 00:03:33.450 CC lib/nvmf/ctrlr.o 00:03:33.450 CC lib/ublk/ublk.o 00:03:33.727 CC lib/scsi/scsi_bdev.o 00:03:33.727 CC lib/ublk/ublk_rpc.o 00:03:33.727 CC lib/nvmf/ctrlr_discovery.o 00:03:33.727 CC lib/ftl/ftl_layout.o 00:03:33.727 CC lib/scsi/scsi_pr.o 00:03:33.727 CC lib/nvmf/ctrlr_bdev.o 00:03:33.727 CC lib/nvmf/subsystem.o 00:03:33.991 CC lib/ftl/ftl_debug.o 00:03:33.991 LIB libspdk_nbd.a 00:03:33.991 SO libspdk_nbd.so.7.0 00:03:33.991 CC lib/ftl/ftl_io.o 00:03:33.991 CC lib/ftl/ftl_sb.o 00:03:33.991 CC lib/scsi/scsi_rpc.o 00:03:33.991 SYMLINK libspdk_nbd.so 00:03:33.991 CC lib/ftl/ftl_l2p.o 00:03:34.250 CC lib/ftl/ftl_l2p_flat.o 00:03:34.250 LIB libspdk_ublk.a 00:03:34.250 SO libspdk_ublk.so.3.0 00:03:34.250 CC lib/scsi/task.o 00:03:34.250 SYMLINK libspdk_ublk.so 00:03:34.250 CC lib/ftl/ftl_nv_cache.o 00:03:34.250 CC lib/nvmf/nvmf.o 00:03:34.250 CC lib/nvmf/nvmf_rpc.o 00:03:34.250 CC lib/ftl/ftl_band.o 00:03:34.250 CC lib/ftl/ftl_band_ops.o 00:03:34.250 CC lib/nvmf/transport.o 00:03:34.509 LIB libspdk_scsi.a 00:03:34.509 CC lib/nvmf/tcp.o 00:03:34.509 SO libspdk_scsi.so.9.0 00:03:34.509 SYMLINK libspdk_scsi.so 00:03:34.509 CC lib/nvmf/stubs.o 00:03:34.767 CC lib/ftl/ftl_writer.o 00:03:34.767 CC lib/ftl/ftl_rq.o 00:03:34.767 CC lib/ftl/ftl_reloc.o 00:03:34.767 CC lib/nvmf/mdns_server.o 00:03:35.025 CC lib/nvmf/rdma.o 00:03:35.025 CC lib/ftl/ftl_l2p_cache.o 00:03:35.025 CC lib/nvmf/auth.o 00:03:35.025 CC lib/ftl/ftl_p2l.o 00:03:35.284 CC lib/ftl/mngt/ftl_mngt.o 00:03:35.284 CC lib/iscsi/conn.o 00:03:35.284 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:35.284 CC lib/vhost/vhost.o 00:03:35.284 CC lib/vhost/vhost_rpc.o 00:03:35.542 CC lib/iscsi/init_grp.o 00:03:35.542 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:35.542 CC lib/vhost/vhost_scsi.o 00:03:35.542 CC lib/vhost/vhost_blk.o 00:03:35.800 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:35.800 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:35.800 CC lib/iscsi/iscsi.o 00:03:35.800 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:35.800 CC lib/vhost/rte_vhost_user.o 00:03:36.059 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:36.059 CC lib/iscsi/md5.o 00:03:36.059 CC lib/iscsi/param.o 00:03:36.059 CC lib/iscsi/portal_grp.o 00:03:36.059 CC lib/iscsi/tgt_node.o 00:03:36.317 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:36.317 CC lib/iscsi/iscsi_subsystem.o 00:03:36.317 CC lib/iscsi/iscsi_rpc.o 00:03:36.317 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:36.317 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:36.317 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:36.575 CC lib/iscsi/task.o 00:03:36.575 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:36.575 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:36.575 CC lib/ftl/utils/ftl_conf.o 00:03:36.575 CC lib/ftl/utils/ftl_md.o 00:03:36.832 CC lib/ftl/utils/ftl_mempool.o 00:03:36.832 CC lib/ftl/utils/ftl_bitmap.o 00:03:36.832 CC lib/ftl/utils/ftl_property.o 00:03:36.832 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:36.832 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:36.832 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:36.832 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:37.090 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:37.090 LIB libspdk_vhost.a 00:03:37.090 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:37.090 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:37.090 SO libspdk_vhost.so.8.0 00:03:37.090 LIB 
libspdk_iscsi.a 00:03:37.090 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:37.090 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:37.090 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:37.090 LIB libspdk_nvmf.a 00:03:37.090 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:37.090 SYMLINK libspdk_vhost.so 00:03:37.091 CC lib/ftl/base/ftl_base_dev.o 00:03:37.091 SO libspdk_iscsi.so.8.0 00:03:37.348 CC lib/ftl/base/ftl_base_bdev.o 00:03:37.348 CC lib/ftl/ftl_trace.o 00:03:37.348 SO libspdk_nvmf.so.19.0 00:03:37.348 SYMLINK libspdk_iscsi.so 00:03:37.348 SYMLINK libspdk_nvmf.so 00:03:37.606 LIB libspdk_ftl.a 00:03:37.606 SO libspdk_ftl.so.9.0 00:03:38.173 SYMLINK libspdk_ftl.so 00:03:38.432 CC module/env_dpdk/env_dpdk_rpc.o 00:03:38.432 CC module/scheduler/gscheduler/gscheduler.o 00:03:38.432 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:38.432 CC module/blob/bdev/blob_bdev.o 00:03:38.432 CC module/sock/posix/posix.o 00:03:38.432 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:38.432 CC module/sock/uring/uring.o 00:03:38.432 CC module/accel/ioat/accel_ioat.o 00:03:38.432 CC module/accel/error/accel_error.o 00:03:38.432 CC module/keyring/file/keyring.o 00:03:38.432 LIB libspdk_env_dpdk_rpc.a 00:03:38.432 SO libspdk_env_dpdk_rpc.so.6.0 00:03:38.690 SYMLINK libspdk_env_dpdk_rpc.so 00:03:38.690 CC module/accel/error/accel_error_rpc.o 00:03:38.690 LIB libspdk_scheduler_gscheduler.a 00:03:38.690 LIB libspdk_scheduler_dpdk_governor.a 00:03:38.690 CC module/keyring/file/keyring_rpc.o 00:03:38.690 SO libspdk_scheduler_gscheduler.so.4.0 00:03:38.690 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:38.690 LIB libspdk_scheduler_dynamic.a 00:03:38.690 CC module/accel/ioat/accel_ioat_rpc.o 00:03:38.690 SO libspdk_scheduler_dynamic.so.4.0 00:03:38.690 SYMLINK libspdk_scheduler_gscheduler.so 00:03:38.690 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:38.690 LIB libspdk_blob_bdev.a 00:03:38.690 SYMLINK libspdk_scheduler_dynamic.so 00:03:38.690 LIB libspdk_accel_error.a 00:03:38.690 SO libspdk_blob_bdev.so.11.0 00:03:38.690 LIB libspdk_keyring_file.a 00:03:38.690 SO libspdk_accel_error.so.2.0 00:03:38.690 LIB libspdk_accel_ioat.a 00:03:38.690 SYMLINK libspdk_blob_bdev.so 00:03:38.690 SO libspdk_keyring_file.so.1.0 00:03:38.948 SO libspdk_accel_ioat.so.6.0 00:03:38.948 SYMLINK libspdk_accel_error.so 00:03:38.948 SYMLINK libspdk_keyring_file.so 00:03:38.948 CC module/accel/dsa/accel_dsa.o 00:03:38.948 CC module/accel/dsa/accel_dsa_rpc.o 00:03:38.948 CC module/accel/iaa/accel_iaa.o 00:03:38.948 CC module/accel/iaa/accel_iaa_rpc.o 00:03:38.948 SYMLINK libspdk_accel_ioat.so 00:03:38.948 CC module/keyring/linux/keyring.o 00:03:38.948 CC module/keyring/linux/keyring_rpc.o 00:03:39.206 LIB libspdk_keyring_linux.a 00:03:39.206 CC module/bdev/delay/vbdev_delay.o 00:03:39.206 LIB libspdk_accel_iaa.a 00:03:39.206 SO libspdk_keyring_linux.so.1.0 00:03:39.206 CC module/blobfs/bdev/blobfs_bdev.o 00:03:39.206 LIB libspdk_sock_uring.a 00:03:39.206 SO libspdk_accel_iaa.so.3.0 00:03:39.206 LIB libspdk_accel_dsa.a 00:03:39.206 LIB libspdk_sock_posix.a 00:03:39.206 SO libspdk_sock_uring.so.5.0 00:03:39.206 SYMLINK libspdk_keyring_linux.so 00:03:39.206 SO libspdk_accel_dsa.so.5.0 00:03:39.206 CC module/bdev/error/vbdev_error.o 00:03:39.206 CC module/bdev/error/vbdev_error_rpc.o 00:03:39.206 CC module/bdev/gpt/gpt.o 00:03:39.206 SO libspdk_sock_posix.so.6.0 00:03:39.206 SYMLINK libspdk_accel_iaa.so 00:03:39.206 CC module/bdev/lvol/vbdev_lvol.o 00:03:39.206 SYMLINK libspdk_sock_uring.so 00:03:39.206 CC module/bdev/gpt/vbdev_gpt.o 00:03:39.206 CC 
module/bdev/delay/vbdev_delay_rpc.o 00:03:39.206 SYMLINK libspdk_accel_dsa.so 00:03:39.206 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:39.206 SYMLINK libspdk_sock_posix.so 00:03:39.206 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:39.465 LIB libspdk_bdev_error.a 00:03:39.465 LIB libspdk_bdev_delay.a 00:03:39.465 CC module/bdev/malloc/bdev_malloc.o 00:03:39.465 SO libspdk_bdev_error.so.6.0 00:03:39.465 LIB libspdk_blobfs_bdev.a 00:03:39.465 SO libspdk_bdev_delay.so.6.0 00:03:39.465 LIB libspdk_bdev_gpt.a 00:03:39.465 CC module/bdev/null/bdev_null.o 00:03:39.465 SO libspdk_blobfs_bdev.so.6.0 00:03:39.465 SO libspdk_bdev_gpt.so.6.0 00:03:39.465 CC module/bdev/nvme/bdev_nvme.o 00:03:39.465 SYMLINK libspdk_bdev_error.so 00:03:39.465 SYMLINK libspdk_bdev_delay.so 00:03:39.723 CC module/bdev/passthru/vbdev_passthru.o 00:03:39.723 SYMLINK libspdk_blobfs_bdev.so 00:03:39.723 SYMLINK libspdk_bdev_gpt.so 00:03:39.723 CC module/bdev/null/bdev_null_rpc.o 00:03:39.723 CC module/bdev/raid/bdev_raid.o 00:03:39.723 CC module/bdev/split/vbdev_split.o 00:03:39.723 LIB libspdk_bdev_lvol.a 00:03:39.723 SO libspdk_bdev_lvol.so.6.0 00:03:39.723 LIB libspdk_bdev_null.a 00:03:39.723 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:39.723 CC module/bdev/uring/bdev_uring.o 00:03:39.723 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:39.723 SO libspdk_bdev_null.so.6.0 00:03:39.982 SYMLINK libspdk_bdev_lvol.so 00:03:39.982 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:39.982 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:39.982 SYMLINK libspdk_bdev_null.so 00:03:39.982 CC module/bdev/aio/bdev_aio.o 00:03:39.982 CC module/bdev/split/vbdev_split_rpc.o 00:03:39.982 LIB libspdk_bdev_malloc.a 00:03:39.982 SO libspdk_bdev_malloc.so.6.0 00:03:39.982 LIB libspdk_bdev_passthru.a 00:03:40.242 SYMLINK libspdk_bdev_malloc.so 00:03:40.242 CC module/bdev/aio/bdev_aio_rpc.o 00:03:40.242 SO libspdk_bdev_passthru.so.6.0 00:03:40.242 CC module/bdev/ftl/bdev_ftl.o 00:03:40.242 LIB libspdk_bdev_zone_block.a 00:03:40.242 LIB libspdk_bdev_split.a 00:03:40.242 CC module/bdev/uring/bdev_uring_rpc.o 00:03:40.242 SO libspdk_bdev_zone_block.so.6.0 00:03:40.242 SYMLINK libspdk_bdev_passthru.so 00:03:40.242 SO libspdk_bdev_split.so.6.0 00:03:40.242 CC module/bdev/iscsi/bdev_iscsi.o 00:03:40.242 SYMLINK libspdk_bdev_zone_block.so 00:03:40.242 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:40.242 SYMLINK libspdk_bdev_split.so 00:03:40.242 CC module/bdev/raid/bdev_raid_rpc.o 00:03:40.242 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:40.242 LIB libspdk_bdev_aio.a 00:03:40.242 SO libspdk_bdev_aio.so.6.0 00:03:40.242 LIB libspdk_bdev_uring.a 00:03:40.501 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:40.501 SO libspdk_bdev_uring.so.6.0 00:03:40.501 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:40.501 SYMLINK libspdk_bdev_aio.so 00:03:40.501 CC module/bdev/raid/bdev_raid_sb.o 00:03:40.501 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:40.501 SYMLINK libspdk_bdev_uring.so 00:03:40.501 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:40.501 CC module/bdev/raid/raid0.o 00:03:40.501 LIB libspdk_bdev_iscsi.a 00:03:40.501 LIB libspdk_bdev_ftl.a 00:03:40.761 SO libspdk_bdev_iscsi.so.6.0 00:03:40.761 SO libspdk_bdev_ftl.so.6.0 00:03:40.761 CC module/bdev/raid/raid1.o 00:03:40.761 CC module/bdev/nvme/nvme_rpc.o 00:03:40.761 SYMLINK libspdk_bdev_iscsi.so 00:03:40.761 CC module/bdev/nvme/bdev_mdns_client.o 00:03:40.761 CC module/bdev/nvme/vbdev_opal.o 00:03:40.761 CC module/bdev/raid/concat.o 00:03:40.761 SYMLINK libspdk_bdev_ftl.so 00:03:40.761 CC 
module/bdev/nvme/vbdev_opal_rpc.o 00:03:40.761 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:41.027 LIB libspdk_bdev_virtio.a 00:03:41.027 SO libspdk_bdev_virtio.so.6.0 00:03:41.027 LIB libspdk_bdev_raid.a 00:03:41.027 SO libspdk_bdev_raid.so.6.0 00:03:41.027 SYMLINK libspdk_bdev_virtio.so 00:03:41.027 SYMLINK libspdk_bdev_raid.so 00:03:41.962 LIB libspdk_bdev_nvme.a 00:03:41.962 SO libspdk_bdev_nvme.so.7.0 00:03:41.962 SYMLINK libspdk_bdev_nvme.so 00:03:42.528 CC module/event/subsystems/scheduler/scheduler.o 00:03:42.528 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:42.528 CC module/event/subsystems/iobuf/iobuf.o 00:03:42.528 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:42.528 CC module/event/subsystems/keyring/keyring.o 00:03:42.528 CC module/event/subsystems/vmd/vmd.o 00:03:42.528 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:42.528 CC module/event/subsystems/sock/sock.o 00:03:42.528 LIB libspdk_event_keyring.a 00:03:42.528 LIB libspdk_event_scheduler.a 00:03:42.529 LIB libspdk_event_vhost_blk.a 00:03:42.529 LIB libspdk_event_vmd.a 00:03:42.529 LIB libspdk_event_sock.a 00:03:42.529 LIB libspdk_event_iobuf.a 00:03:42.529 SO libspdk_event_keyring.so.1.0 00:03:42.529 SO libspdk_event_vhost_blk.so.3.0 00:03:42.529 SO libspdk_event_scheduler.so.4.0 00:03:42.529 SO libspdk_event_vmd.so.6.0 00:03:42.529 SO libspdk_event_sock.so.5.0 00:03:42.529 SO libspdk_event_iobuf.so.3.0 00:03:42.529 SYMLINK libspdk_event_scheduler.so 00:03:42.529 SYMLINK libspdk_event_keyring.so 00:03:42.529 SYMLINK libspdk_event_vhost_blk.so 00:03:42.787 SYMLINK libspdk_event_vmd.so 00:03:42.787 SYMLINK libspdk_event_sock.so 00:03:42.787 SYMLINK libspdk_event_iobuf.so 00:03:43.046 CC module/event/subsystems/accel/accel.o 00:03:43.046 LIB libspdk_event_accel.a 00:03:43.046 SO libspdk_event_accel.so.6.0 00:03:43.305 SYMLINK libspdk_event_accel.so 00:03:43.564 CC module/event/subsystems/bdev/bdev.o 00:03:43.564 LIB libspdk_event_bdev.a 00:03:43.823 SO libspdk_event_bdev.so.6.0 00:03:43.823 SYMLINK libspdk_event_bdev.so 00:03:44.082 CC module/event/subsystems/scsi/scsi.o 00:03:44.082 CC module/event/subsystems/ublk/ublk.o 00:03:44.082 CC module/event/subsystems/nbd/nbd.o 00:03:44.082 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:44.082 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:44.082 LIB libspdk_event_nbd.a 00:03:44.082 LIB libspdk_event_ublk.a 00:03:44.082 LIB libspdk_event_scsi.a 00:03:44.340 SO libspdk_event_nbd.so.6.0 00:03:44.340 SO libspdk_event_ublk.so.3.0 00:03:44.340 SO libspdk_event_scsi.so.6.0 00:03:44.340 SYMLINK libspdk_event_nbd.so 00:03:44.340 LIB libspdk_event_nvmf.a 00:03:44.340 SYMLINK libspdk_event_ublk.so 00:03:44.340 SYMLINK libspdk_event_scsi.so 00:03:44.340 SO libspdk_event_nvmf.so.6.0 00:03:44.340 SYMLINK libspdk_event_nvmf.so 00:03:44.599 CC module/event/subsystems/iscsi/iscsi.o 00:03:44.599 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:44.599 LIB libspdk_event_vhost_scsi.a 00:03:44.858 LIB libspdk_event_iscsi.a 00:03:44.858 SO libspdk_event_vhost_scsi.so.3.0 00:03:44.858 SO libspdk_event_iscsi.so.6.0 00:03:44.858 SYMLINK libspdk_event_vhost_scsi.so 00:03:44.858 SYMLINK libspdk_event_iscsi.so 00:03:45.116 SO libspdk.so.6.0 00:03:45.116 SYMLINK libspdk.so 00:03:45.116 CC app/trace_record/trace_record.o 00:03:45.116 CXX app/trace/trace.o 00:03:45.375 TEST_HEADER include/spdk/accel.h 00:03:45.375 TEST_HEADER include/spdk/accel_module.h 00:03:45.375 TEST_HEADER include/spdk/assert.h 00:03:45.375 TEST_HEADER include/spdk/barrier.h 00:03:45.375 TEST_HEADER 
include/spdk/base64.h 00:03:45.375 TEST_HEADER include/spdk/bdev.h 00:03:45.375 TEST_HEADER include/spdk/bdev_module.h 00:03:45.375 TEST_HEADER include/spdk/bdev_zone.h 00:03:45.375 TEST_HEADER include/spdk/bit_array.h 00:03:45.375 TEST_HEADER include/spdk/bit_pool.h 00:03:45.375 TEST_HEADER include/spdk/blob_bdev.h 00:03:45.375 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:45.375 CC app/nvmf_tgt/nvmf_main.o 00:03:45.375 TEST_HEADER include/spdk/blobfs.h 00:03:45.375 TEST_HEADER include/spdk/blob.h 00:03:45.375 TEST_HEADER include/spdk/conf.h 00:03:45.375 TEST_HEADER include/spdk/config.h 00:03:45.375 TEST_HEADER include/spdk/cpuset.h 00:03:45.375 TEST_HEADER include/spdk/crc16.h 00:03:45.375 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:45.375 TEST_HEADER include/spdk/crc32.h 00:03:45.375 TEST_HEADER include/spdk/crc64.h 00:03:45.375 TEST_HEADER include/spdk/dif.h 00:03:45.375 TEST_HEADER include/spdk/dma.h 00:03:45.375 TEST_HEADER include/spdk/endian.h 00:03:45.375 TEST_HEADER include/spdk/env_dpdk.h 00:03:45.375 TEST_HEADER include/spdk/env.h 00:03:45.375 TEST_HEADER include/spdk/event.h 00:03:45.375 TEST_HEADER include/spdk/fd_group.h 00:03:45.375 TEST_HEADER include/spdk/fd.h 00:03:45.375 TEST_HEADER include/spdk/file.h 00:03:45.375 TEST_HEADER include/spdk/ftl.h 00:03:45.375 TEST_HEADER include/spdk/gpt_spec.h 00:03:45.375 TEST_HEADER include/spdk/hexlify.h 00:03:45.375 CC examples/util/zipf/zipf.o 00:03:45.375 TEST_HEADER include/spdk/histogram_data.h 00:03:45.375 TEST_HEADER include/spdk/idxd.h 00:03:45.375 TEST_HEADER include/spdk/idxd_spec.h 00:03:45.375 TEST_HEADER include/spdk/init.h 00:03:45.375 TEST_HEADER include/spdk/ioat.h 00:03:45.375 CC examples/ioat/perf/perf.o 00:03:45.375 TEST_HEADER include/spdk/ioat_spec.h 00:03:45.375 CC test/thread/poller_perf/poller_perf.o 00:03:45.375 TEST_HEADER include/spdk/iscsi_spec.h 00:03:45.375 TEST_HEADER include/spdk/json.h 00:03:45.375 TEST_HEADER include/spdk/jsonrpc.h 00:03:45.375 TEST_HEADER include/spdk/keyring.h 00:03:45.375 TEST_HEADER include/spdk/keyring_module.h 00:03:45.375 TEST_HEADER include/spdk/likely.h 00:03:45.375 TEST_HEADER include/spdk/log.h 00:03:45.375 TEST_HEADER include/spdk/lvol.h 00:03:45.375 CC test/dma/test_dma/test_dma.o 00:03:45.375 TEST_HEADER include/spdk/memory.h 00:03:45.375 TEST_HEADER include/spdk/mmio.h 00:03:45.375 TEST_HEADER include/spdk/nbd.h 00:03:45.375 CC test/app/bdev_svc/bdev_svc.o 00:03:45.375 TEST_HEADER include/spdk/notify.h 00:03:45.375 TEST_HEADER include/spdk/nvme.h 00:03:45.375 TEST_HEADER include/spdk/nvme_intel.h 00:03:45.375 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:45.375 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:45.375 TEST_HEADER include/spdk/nvme_spec.h 00:03:45.375 TEST_HEADER include/spdk/nvme_zns.h 00:03:45.375 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:45.375 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:45.375 TEST_HEADER include/spdk/nvmf.h 00:03:45.375 TEST_HEADER include/spdk/nvmf_spec.h 00:03:45.375 TEST_HEADER include/spdk/nvmf_transport.h 00:03:45.375 TEST_HEADER include/spdk/opal.h 00:03:45.375 TEST_HEADER include/spdk/opal_spec.h 00:03:45.375 TEST_HEADER include/spdk/pci_ids.h 00:03:45.375 TEST_HEADER include/spdk/pipe.h 00:03:45.375 TEST_HEADER include/spdk/queue.h 00:03:45.375 TEST_HEADER include/spdk/reduce.h 00:03:45.375 TEST_HEADER include/spdk/rpc.h 00:03:45.375 TEST_HEADER include/spdk/scheduler.h 00:03:45.375 TEST_HEADER include/spdk/scsi.h 00:03:45.375 TEST_HEADER include/spdk/scsi_spec.h 00:03:45.375 TEST_HEADER include/spdk/sock.h 00:03:45.375 
TEST_HEADER include/spdk/stdinc.h 00:03:45.375 TEST_HEADER include/spdk/string.h 00:03:45.375 TEST_HEADER include/spdk/thread.h 00:03:45.375 TEST_HEADER include/spdk/trace.h 00:03:45.375 TEST_HEADER include/spdk/trace_parser.h 00:03:45.375 TEST_HEADER include/spdk/tree.h 00:03:45.375 TEST_HEADER include/spdk/ublk.h 00:03:45.375 TEST_HEADER include/spdk/util.h 00:03:45.375 TEST_HEADER include/spdk/uuid.h 00:03:45.375 TEST_HEADER include/spdk/version.h 00:03:45.634 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:45.634 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:45.634 TEST_HEADER include/spdk/vhost.h 00:03:45.634 TEST_HEADER include/spdk/vmd.h 00:03:45.634 TEST_HEADER include/spdk/xor.h 00:03:45.634 TEST_HEADER include/spdk/zipf.h 00:03:45.634 CXX test/cpp_headers/accel.o 00:03:45.634 LINK interrupt_tgt 00:03:45.634 LINK zipf 00:03:45.634 LINK nvmf_tgt 00:03:45.634 LINK spdk_trace_record 00:03:45.634 LINK poller_perf 00:03:45.634 LINK ioat_perf 00:03:45.634 LINK bdev_svc 00:03:45.634 CXX test/cpp_headers/assert.o 00:03:45.634 CXX test/cpp_headers/accel_module.o 00:03:45.634 CXX test/cpp_headers/barrier.o 00:03:45.892 LINK spdk_trace 00:03:45.892 CXX test/cpp_headers/base64.o 00:03:45.892 CXX test/cpp_headers/bdev.o 00:03:45.892 CXX test/cpp_headers/bdev_module.o 00:03:45.892 LINK test_dma 00:03:45.892 CC examples/ioat/verify/verify.o 00:03:45.892 CXX test/cpp_headers/bdev_zone.o 00:03:45.892 CXX test/cpp_headers/bit_array.o 00:03:45.892 CC test/app/histogram_perf/histogram_perf.o 00:03:46.151 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:46.151 CC app/iscsi_tgt/iscsi_tgt.o 00:03:46.151 CC examples/thread/thread/thread_ex.o 00:03:46.151 LINK verify 00:03:46.151 LINK histogram_perf 00:03:46.151 CXX test/cpp_headers/bit_pool.o 00:03:46.151 CC test/env/mem_callbacks/mem_callbacks.o 00:03:46.151 CC test/event/event_perf/event_perf.o 00:03:46.151 CC app/spdk_lspci/spdk_lspci.o 00:03:46.151 CC app/spdk_tgt/spdk_tgt.o 00:03:46.409 CXX test/cpp_headers/blob_bdev.o 00:03:46.409 LINK iscsi_tgt 00:03:46.410 LINK event_perf 00:03:46.410 CC app/spdk_nvme_perf/perf.o 00:03:46.410 LINK thread 00:03:46.410 CC app/spdk_nvme_identify/identify.o 00:03:46.410 LINK spdk_lspci 00:03:46.410 LINK nvme_fuzz 00:03:46.410 CXX test/cpp_headers/blobfs_bdev.o 00:03:46.410 LINK spdk_tgt 00:03:46.668 CC test/event/reactor/reactor.o 00:03:46.668 CC test/rpc_client/rpc_client_test.o 00:03:46.668 CC test/event/reactor_perf/reactor_perf.o 00:03:46.668 LINK reactor 00:03:46.668 CXX test/cpp_headers/blobfs.o 00:03:46.668 CXX test/cpp_headers/blob.o 00:03:46.668 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:46.668 CC examples/sock/hello_world/hello_sock.o 00:03:46.668 LINK mem_callbacks 00:03:46.668 LINK reactor_perf 00:03:46.668 LINK rpc_client_test 00:03:46.927 CXX test/cpp_headers/conf.o 00:03:46.927 CXX test/cpp_headers/config.o 00:03:46.927 CXX test/cpp_headers/cpuset.o 00:03:46.927 CC test/env/vtophys/vtophys.o 00:03:46.927 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:46.927 LINK hello_sock 00:03:46.927 CC test/event/app_repeat/app_repeat.o 00:03:46.927 CC app/spdk_nvme_discover/discovery_aer.o 00:03:47.185 CXX test/cpp_headers/crc16.o 00:03:47.185 LINK vtophys 00:03:47.185 LINK env_dpdk_post_init 00:03:47.185 CC test/app/jsoncat/jsoncat.o 00:03:47.185 LINK spdk_nvme_identify 00:03:47.185 LINK app_repeat 00:03:47.185 LINK spdk_nvme_perf 00:03:47.185 LINK spdk_nvme_discover 00:03:47.185 CXX test/cpp_headers/crc32.o 00:03:47.185 LINK jsoncat 00:03:47.185 CC examples/vmd/lsvmd/lsvmd.o 00:03:47.443 CC 
test/env/memory/memory_ut.o 00:03:47.444 CC test/app/stub/stub.o 00:03:47.444 CC app/spdk_top/spdk_top.o 00:03:47.444 LINK lsvmd 00:03:47.444 CXX test/cpp_headers/crc64.o 00:03:47.444 CC test/event/scheduler/scheduler.o 00:03:47.444 CC examples/vmd/led/led.o 00:03:47.707 LINK stub 00:03:47.707 CXX test/cpp_headers/dif.o 00:03:47.707 CC test/accel/dif/dif.o 00:03:47.707 CC test/blobfs/mkfs/mkfs.o 00:03:47.707 LINK led 00:03:47.707 LINK scheduler 00:03:47.707 CC app/vhost/vhost.o 00:03:47.707 CXX test/cpp_headers/dma.o 00:03:47.965 CC app/spdk_dd/spdk_dd.o 00:03:47.965 LINK mkfs 00:03:47.965 CXX test/cpp_headers/endian.o 00:03:47.965 LINK vhost 00:03:47.965 CC examples/idxd/perf/perf.o 00:03:47.965 LINK dif 00:03:47.965 CXX test/cpp_headers/env_dpdk.o 00:03:48.224 CC test/lvol/esnap/esnap.o 00:03:48.224 CXX test/cpp_headers/env.o 00:03:48.224 CC app/fio/nvme/fio_plugin.o 00:03:48.224 LINK spdk_top 00:03:48.224 LINK spdk_dd 00:03:48.224 LINK iscsi_fuzz 00:03:48.224 CC examples/accel/perf/accel_perf.o 00:03:48.224 LINK idxd_perf 00:03:48.483 CXX test/cpp_headers/event.o 00:03:48.483 CC app/fio/bdev/fio_plugin.o 00:03:48.483 CXX test/cpp_headers/fd_group.o 00:03:48.483 CXX test/cpp_headers/fd.o 00:03:48.483 LINK memory_ut 00:03:48.483 CXX test/cpp_headers/file.o 00:03:48.742 CXX test/cpp_headers/ftl.o 00:03:48.742 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:48.742 CC test/env/pci/pci_ut.o 00:03:48.742 CXX test/cpp_headers/gpt_spec.o 00:03:48.742 CXX test/cpp_headers/hexlify.o 00:03:48.742 LINK accel_perf 00:03:48.742 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:48.742 LINK spdk_nvme 00:03:48.742 CXX test/cpp_headers/histogram_data.o 00:03:48.742 CC examples/blob/hello_world/hello_blob.o 00:03:49.000 LINK spdk_bdev 00:03:49.000 CC examples/blob/cli/blobcli.o 00:03:49.000 CC test/nvme/aer/aer.o 00:03:49.000 CC test/nvme/reset/reset.o 00:03:49.000 CXX test/cpp_headers/idxd.o 00:03:49.000 LINK pci_ut 00:03:49.000 LINK hello_blob 00:03:49.258 CC test/nvme/sgl/sgl.o 00:03:49.258 CC test/bdev/bdevio/bdevio.o 00:03:49.258 LINK vhost_fuzz 00:03:49.258 CXX test/cpp_headers/idxd_spec.o 00:03:49.258 LINK aer 00:03:49.258 LINK reset 00:03:49.258 CXX test/cpp_headers/init.o 00:03:49.516 CC test/nvme/e2edp/nvme_dp.o 00:03:49.516 CC test/nvme/overhead/overhead.o 00:03:49.516 CC test/nvme/err_injection/err_injection.o 00:03:49.516 LINK sgl 00:03:49.516 LINK blobcli 00:03:49.516 LINK bdevio 00:03:49.516 CC test/nvme/startup/startup.o 00:03:49.516 CXX test/cpp_headers/ioat.o 00:03:49.516 CC test/nvme/reserve/reserve.o 00:03:49.516 CXX test/cpp_headers/ioat_spec.o 00:03:49.516 LINK err_injection 00:03:49.774 LINK nvme_dp 00:03:49.774 LINK overhead 00:03:49.774 LINK startup 00:03:49.774 CXX test/cpp_headers/iscsi_spec.o 00:03:49.774 LINK reserve 00:03:49.774 CC examples/nvme/hello_world/hello_world.o 00:03:49.774 CC examples/nvme/reconnect/reconnect.o 00:03:49.774 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:50.032 CC test/nvme/simple_copy/simple_copy.o 00:03:50.032 CXX test/cpp_headers/json.o 00:03:50.032 CC test/nvme/connect_stress/connect_stress.o 00:03:50.032 CC examples/bdev/hello_world/hello_bdev.o 00:03:50.032 CC examples/nvme/arbitration/arbitration.o 00:03:50.032 LINK hello_world 00:03:50.032 CC test/nvme/boot_partition/boot_partition.o 00:03:50.032 CXX test/cpp_headers/jsonrpc.o 00:03:50.032 LINK connect_stress 00:03:50.032 LINK simple_copy 00:03:50.290 LINK reconnect 00:03:50.290 CXX test/cpp_headers/keyring.o 00:03:50.290 LINK hello_bdev 00:03:50.290 LINK boot_partition 00:03:50.290 LINK 
arbitration 00:03:50.290 LINK nvme_manage 00:03:50.290 CXX test/cpp_headers/keyring_module.o 00:03:50.290 CC test/nvme/compliance/nvme_compliance.o 00:03:50.549 CC examples/nvme/hotplug/hotplug.o 00:03:50.549 CC examples/bdev/bdevperf/bdevperf.o 00:03:50.549 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:50.549 CXX test/cpp_headers/likely.o 00:03:50.549 CC examples/nvme/abort/abort.o 00:03:50.549 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:50.549 CC test/nvme/fused_ordering/fused_ordering.o 00:03:50.549 LINK hotplug 00:03:50.549 CXX test/cpp_headers/log.o 00:03:50.549 LINK cmb_copy 00:03:50.549 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:50.807 LINK pmr_persistence 00:03:50.807 LINK nvme_compliance 00:03:50.807 CXX test/cpp_headers/lvol.o 00:03:50.807 LINK fused_ordering 00:03:50.807 CXX test/cpp_headers/memory.o 00:03:50.807 LINK doorbell_aers 00:03:50.807 LINK abort 00:03:50.807 CXX test/cpp_headers/mmio.o 00:03:50.807 CC test/nvme/fdp/fdp.o 00:03:50.807 CC test/nvme/cuse/cuse.o 00:03:51.066 CXX test/cpp_headers/nbd.o 00:03:51.066 CXX test/cpp_headers/notify.o 00:03:51.066 CXX test/cpp_headers/nvme.o 00:03:51.066 CXX test/cpp_headers/nvme_intel.o 00:03:51.066 CXX test/cpp_headers/nvme_ocssd.o 00:03:51.066 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:51.066 CXX test/cpp_headers/nvme_spec.o 00:03:51.066 CXX test/cpp_headers/nvme_zns.o 00:03:51.066 LINK bdevperf 00:03:51.066 CXX test/cpp_headers/nvmf_cmd.o 00:03:51.324 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:51.324 CXX test/cpp_headers/nvmf.o 00:03:51.324 CXX test/cpp_headers/nvmf_spec.o 00:03:51.324 CXX test/cpp_headers/nvmf_transport.o 00:03:51.324 LINK fdp 00:03:51.324 CXX test/cpp_headers/opal.o 00:03:51.324 CXX test/cpp_headers/opal_spec.o 00:03:51.324 CXX test/cpp_headers/pci_ids.o 00:03:51.324 CXX test/cpp_headers/pipe.o 00:03:51.324 CXX test/cpp_headers/queue.o 00:03:51.324 CXX test/cpp_headers/reduce.o 00:03:51.324 CXX test/cpp_headers/rpc.o 00:03:51.583 CXX test/cpp_headers/scheduler.o 00:03:51.583 CXX test/cpp_headers/scsi.o 00:03:51.583 CXX test/cpp_headers/scsi_spec.o 00:03:51.583 CXX test/cpp_headers/sock.o 00:03:51.583 CXX test/cpp_headers/stdinc.o 00:03:51.583 CC examples/nvmf/nvmf/nvmf.o 00:03:51.583 CXX test/cpp_headers/string.o 00:03:51.583 CXX test/cpp_headers/thread.o 00:03:51.583 CXX test/cpp_headers/trace.o 00:03:51.841 CXX test/cpp_headers/trace_parser.o 00:03:51.841 CXX test/cpp_headers/tree.o 00:03:51.841 CXX test/cpp_headers/ublk.o 00:03:51.841 CXX test/cpp_headers/util.o 00:03:51.841 CXX test/cpp_headers/uuid.o 00:03:51.841 CXX test/cpp_headers/version.o 00:03:51.841 CXX test/cpp_headers/vfio_user_pci.o 00:03:51.841 CXX test/cpp_headers/vfio_user_spec.o 00:03:51.841 CXX test/cpp_headers/vhost.o 00:03:51.841 LINK nvmf 00:03:51.841 CXX test/cpp_headers/vmd.o 00:03:51.841 CXX test/cpp_headers/xor.o 00:03:51.841 CXX test/cpp_headers/zipf.o 00:03:52.100 LINK cuse 00:03:53.036 LINK esnap 00:03:53.294 00:03:53.294 real 1m2.153s 00:03:53.294 user 6m21.123s 00:03:53.294 sys 1m29.997s 00:03:53.294 16:52:43 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:03:53.294 16:52:43 make -- common/autotest_common.sh@10 -- $ set +x 00:03:53.294 ************************************ 00:03:53.294 END TEST make 00:03:53.294 ************************************ 00:03:53.294 16:52:43 -- common/autotest_common.sh@1142 -- $ return 0 00:03:53.294 16:52:43 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:53.294 16:52:43 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:53.294 16:52:43 -- pm/common@40 
-- $ local monitor pid pids signal=TERM 00:03:53.294 16:52:43 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:53.294 16:52:43 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:53.294 16:52:43 -- pm/common@44 -- $ pid=5139 00:03:53.294 16:52:43 -- pm/common@50 -- $ kill -TERM 5139 00:03:53.294 16:52:43 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:53.294 16:52:43 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:53.294 16:52:43 -- pm/common@44 -- $ pid=5141 00:03:53.294 16:52:43 -- pm/common@50 -- $ kill -TERM 5141 00:03:53.553 16:52:43 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:53.553 16:52:43 -- nvmf/common.sh@7 -- # uname -s 00:03:53.553 16:52:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:53.553 16:52:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:53.553 16:52:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:53.553 16:52:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:53.553 16:52:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:53.553 16:52:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:53.553 16:52:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:53.553 16:52:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:53.553 16:52:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:53.553 16:52:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:53.553 16:52:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da 00:03:53.553 16:52:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=0b4e8503-7bac-4879-926a-209303c4b3da 00:03:53.553 16:52:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:53.553 16:52:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:53.553 16:52:43 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:03:53.553 16:52:43 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:53.553 16:52:43 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:53.553 16:52:43 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:53.553 16:52:43 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:53.553 16:52:43 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:53.553 16:52:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:53.553 16:52:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:53.553 16:52:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:53.553 16:52:43 -- paths/export.sh@5 -- # export PATH 00:03:53.553 16:52:43 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:53.553 16:52:43 -- nvmf/common.sh@47 -- # : 0 00:03:53.553 16:52:43 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:53.553 16:52:43 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:53.553 16:52:43 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:53.553 16:52:43 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:53.553 16:52:43 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:53.553 16:52:43 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:53.553 16:52:43 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:53.553 16:52:43 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:53.553 16:52:43 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:53.553 16:52:43 -- spdk/autotest.sh@32 -- # uname -s 00:03:53.553 16:52:43 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:53.553 16:52:43 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:53.553 16:52:43 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:53.553 16:52:43 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:53.553 16:52:43 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:53.553 16:52:43 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:53.553 16:52:43 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:53.553 16:52:43 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:53.553 16:52:43 -- spdk/autotest.sh@48 -- # udevadm_pid=52712 00:03:53.553 16:52:43 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:53.553 16:52:43 -- pm/common@17 -- # local monitor 00:03:53.553 16:52:43 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:53.553 16:52:43 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:53.553 16:52:43 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:53.553 16:52:43 -- pm/common@25 -- # sleep 1 00:03:53.553 16:52:43 -- pm/common@21 -- # date +%s 00:03:53.553 16:52:43 -- pm/common@21 -- # date +%s 00:03:53.553 16:52:43 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721062363 00:03:53.553 16:52:43 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721062363 00:03:53.553 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721062363_collect-vmstat.pm.log 00:03:53.553 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721062363_collect-cpu-load.pm.log 00:03:54.489 16:52:44 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:54.489 16:52:44 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:54.489 16:52:44 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:54.489 16:52:44 -- common/autotest_common.sh@10 -- # set +x 00:03:54.489 16:52:44 -- spdk/autotest.sh@59 -- # create_test_list 00:03:54.489 16:52:44 -- common/autotest_common.sh@746 -- # xtrace_disable 00:03:54.489 16:52:44 -- common/autotest_common.sh@10 -- # set +x 00:03:54.489 16:52:44 -- 
spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:54.489 16:52:44 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:54.489 16:52:44 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:54.489 16:52:44 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:54.489 16:52:44 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:54.489 16:52:44 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:54.489 16:52:44 -- common/autotest_common.sh@1455 -- # uname 00:03:54.489 16:52:44 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:54.489 16:52:44 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:54.489 16:52:44 -- common/autotest_common.sh@1475 -- # uname 00:03:54.489 16:52:44 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:54.489 16:52:44 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:54.489 16:52:44 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:54.489 16:52:44 -- spdk/autotest.sh@72 -- # hash lcov 00:03:54.489 16:52:44 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:54.489 16:52:44 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:54.489 --rc lcov_branch_coverage=1 00:03:54.489 --rc lcov_function_coverage=1 00:03:54.489 --rc genhtml_branch_coverage=1 00:03:54.489 --rc genhtml_function_coverage=1 00:03:54.489 --rc genhtml_legend=1 00:03:54.489 --rc geninfo_all_blocks=1 00:03:54.489 ' 00:03:54.489 16:52:44 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:54.489 --rc lcov_branch_coverage=1 00:03:54.489 --rc lcov_function_coverage=1 00:03:54.489 --rc genhtml_branch_coverage=1 00:03:54.489 --rc genhtml_function_coverage=1 00:03:54.489 --rc genhtml_legend=1 00:03:54.489 --rc geninfo_all_blocks=1 00:03:54.489 ' 00:03:54.489 16:52:44 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:54.489 --rc lcov_branch_coverage=1 00:03:54.489 --rc lcov_function_coverage=1 00:03:54.489 --rc genhtml_branch_coverage=1 00:03:54.489 --rc genhtml_function_coverage=1 00:03:54.489 --rc genhtml_legend=1 00:03:54.489 --rc geninfo_all_blocks=1 00:03:54.489 --no-external' 00:03:54.489 16:52:44 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:54.489 --rc lcov_branch_coverage=1 00:03:54.489 --rc lcov_function_coverage=1 00:03:54.489 --rc genhtml_branch_coverage=1 00:03:54.489 --rc genhtml_function_coverage=1 00:03:54.489 --rc genhtml_legend=1 00:03:54.489 --rc geninfo_all_blocks=1 00:03:54.489 --no-external' 00:03:54.489 16:52:44 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:54.747 lcov: LCOV version 1.14 00:03:54.747 16:52:44 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:09.622 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:09.622 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:21.862 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:04:21.862 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 
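The geninfo "no functions found" / "did not produce any data" warnings in this stretch come from the zero-coverage baseline pass (lcov -c -i) over the header-compile objects under test/cpp_headers, which contain no executed functions yet, so they are expected rather than a failure. A minimal, standalone sketch of the usual baseline/capture/merge flow, assuming illustrative file names (cov_base.info, cov_test.info, cov_total.info) and a placeholder source path rather than anything taken from this log:

    # Zero-coverage baseline; header-only objects produce the warnings seen here.
    lcov --rc lcov_branch_coverage=1 --no-external -q \
         -c -i -t Baseline -d /path/to/src -o cov_base.info
    # ... run the test suite so .gcda execution counts are written ...
    # Capture the real counts after the run.
    lcov --rc lcov_branch_coverage=1 --no-external -q \
         -c -t Tests -d /path/to/src -o cov_test.info
    # Merge baseline and test data so files that were never executed still show
    # up at 0% instead of vanishing from the report.
    lcov -a cov_base.info -a cov_test.info -o cov_total.info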
00:04:21.862 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:04:21.862 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:04:21.862 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:04:21.862 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:04:21.862 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:04:21.862 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:04:21.862 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:04:21.862 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:04:21.862 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:04:21.862 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:04:21.862 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:04:21.862 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:04:21.862 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:04:21.862 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:04:21.862 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:04:21.862 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:04:21.862 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:04:21.862 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:04:21.862 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:04:21.862 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:04:21.862 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:04:21.862 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:04:21.862 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:04:21.862 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:04:21.862 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:04:21.862 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:04:21.862 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:04:21.862 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:04:21.862 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:04:21.862 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:04:21.862 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:04:21.862 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:04:21.862 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:04:21.862 
geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:04:21.862 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:04:21.862 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:04:21.862 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:04:21.862 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:04:21.862 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:04:21.862 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:04:21.862 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:04:21.862 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:04:21.862 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:04:21.862 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:04:21.862 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:04:21.862 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:04:21.862 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:04:21.862 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:04:21.862 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:04:21.862 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:04:21.862 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:04:21.862 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:04:21.862 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:04:21.862 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:04:21.862 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:04:21.862 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:04:21.862 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:04:21.862 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:04:21.862 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:04:21.862 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:04:21.862 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:04:21.862 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:04:21.862 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:04:21.862 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:04:21.862 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:04:21.862 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:04:21.862 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:04:21.862 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:04:21.862 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:04:21.862 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:04:21.862 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:04:21.862 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:04:21.862 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:04:21.862 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:04:21.862 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:04:21.863 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:04:21.863 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:04:21.863 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:04:21.863 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:04:21.863 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:04:21.863 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:04:21.863 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:04:21.863 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:04:21.863 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:04:21.863 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:04:21.863 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:04:21.863 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:04:21.863 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:04:21.863 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:04:21.863 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:04:21.863 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:04:21.863 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:04:21.863 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:04:21.863 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:04:21.863 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:04:21.863 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:04:21.863 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:04:21.863 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:04:21.863 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:04:21.863 geninfo: WARNING: GCOV did not produce any data 
for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:04:21.863 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:04:21.863 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:04:21.863 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:04:21.863 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:04:21.863 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:04:21.863 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:04:21.863 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:04:21.863 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:04:21.863 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:04:21.863 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:04:21.863 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:04:21.863 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:04:21.863 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:04:21.863 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:04:21.863 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:04:21.863 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:04:21.863 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:04:21.863 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:04:21.863 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:04:21.863 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:04:21.863 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:04:21.863 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:04:21.863 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:04:21.863 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:04:21.863 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:04:21.863 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:04:21.863 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:04:21.863 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:04:21.863 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:04:21.863 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:04:21.863 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:04:21.863 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 
00:04:21.863 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:04:21.863 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:04:21.863 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:04:21.863 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:04:21.863 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:04:21.863 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:04:21.863 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:04:21.863 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:04:21.863 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:04:21.863 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:04:21.863 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:04:21.863 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:04:21.863 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:04:21.863 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:04:21.863 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:04:21.863 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:04:21.863 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:04:21.863 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:04:21.863 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:04:21.863 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:04:21.863 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:04:21.863 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:04:21.863 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:04:21.863 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:04:21.863 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:04:21.863 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:04:21.863 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:04:21.863 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:04:21.863 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:04:21.863 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:04:21.863 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:04:21.863 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:04:21.863 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:04:21.863 geninfo: WARNING: GCOV did 
not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:04:21.863 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:04:21.863 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:04:21.863 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:04:21.863 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:04:21.863 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:04:21.863 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:04:21.863 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:04:21.863 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:04:23.764 16:53:13 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:04:23.764 16:53:13 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:23.764 16:53:13 -- common/autotest_common.sh@10 -- # set +x 00:04:23.764 16:53:13 -- spdk/autotest.sh@91 -- # rm -f 00:04:23.764 16:53:13 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:24.332 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:24.332 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:24.332 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:24.332 16:53:14 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:04:24.332 16:53:14 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:24.332 16:53:14 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:24.332 16:53:14 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:24.332 16:53:14 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:24.332 16:53:14 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:24.332 16:53:14 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:24.332 16:53:14 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:24.332 16:53:14 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:24.332 16:53:14 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:24.332 16:53:14 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:04:24.332 16:53:14 -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:04:24.332 16:53:14 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:24.332 16:53:14 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:24.332 16:53:14 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:24.332 16:53:14 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:04:24.332 16:53:14 -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:04:24.332 16:53:14 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:24.332 16:53:14 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:24.332 16:53:14 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:24.332 16:53:14 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 00:04:24.332 16:53:14 -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:04:24.333 16:53:14 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:24.333 16:53:14 -- 
common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:24.333 16:53:14 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:04:24.333 16:53:14 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:24.333 16:53:14 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:24.333 16:53:14 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:04:24.333 16:53:14 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:04:24.333 16:53:14 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:24.592 No valid GPT data, bailing 00:04:24.592 16:53:14 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:24.592 16:53:14 -- scripts/common.sh@391 -- # pt= 00:04:24.592 16:53:14 -- scripts/common.sh@392 -- # return 1 00:04:24.592 16:53:14 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:24.592 1+0 records in 00:04:24.592 1+0 records out 00:04:24.592 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00473223 s, 222 MB/s 00:04:24.592 16:53:14 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:24.592 16:53:14 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:24.592 16:53:14 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:04:24.592 16:53:14 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:04:24.592 16:53:14 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:24.592 No valid GPT data, bailing 00:04:24.593 16:53:14 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:24.593 16:53:14 -- scripts/common.sh@391 -- # pt= 00:04:24.593 16:53:14 -- scripts/common.sh@392 -- # return 1 00:04:24.593 16:53:14 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:24.593 1+0 records in 00:04:24.593 1+0 records out 00:04:24.593 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00464194 s, 226 MB/s 00:04:24.593 16:53:14 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:24.593 16:53:14 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:24.593 16:53:14 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n2 00:04:24.593 16:53:14 -- scripts/common.sh@378 -- # local block=/dev/nvme1n2 pt 00:04:24.593 16:53:14 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:04:24.593 No valid GPT data, bailing 00:04:24.593 16:53:14 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:24.593 16:53:14 -- scripts/common.sh@391 -- # pt= 00:04:24.593 16:53:14 -- scripts/common.sh@392 -- # return 1 00:04:24.593 16:53:14 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:04:24.593 1+0 records in 00:04:24.593 1+0 records out 00:04:24.593 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00377773 s, 278 MB/s 00:04:24.593 16:53:14 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:24.593 16:53:14 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:24.593 16:53:14 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n3 00:04:24.593 16:53:14 -- scripts/common.sh@378 -- # local block=/dev/nvme1n3 pt 00:04:24.593 16:53:14 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:04:24.852 No valid GPT data, bailing 00:04:24.852 16:53:14 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:24.852 16:53:14 -- scripts/common.sh@391 -- # pt= 00:04:24.852 16:53:14 -- scripts/common.sh@392 -- # return 1 00:04:24.852 16:53:14 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 
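Before the setup tests start, the trace above shows the device-prep loop: every whole /dev/nvme*n* namespace is skipped if it is zoned (/sys/block/<name>/queue/zoned reports something other than "none"), probed for a partition table (scripts/spdk-gpt.py first, then blkid -s PTTYPE), and, when nothing valid is found, has its first MiB zeroed with dd. A hedged, standalone sketch of that pattern in plain shell (not the autotest code itself, and simplified to the blkid probe only):

    #!/usr/bin/env bash
    shopt -s extglob
    for dev in /dev/nvme*n!(*p*); do                # whole namespaces, no partitions
        name=$(basename "$dev")
        zoned=none
        [[ -e /sys/block/$name/queue/zoned ]] && zoned=$(</sys/block/"$name"/queue/zoned)
        [[ "$zoned" != none ]] && continue          # leave zoned namespaces alone
        if [[ -z "$(blkid -s PTTYPE -o value "$dev")" ]]; then
            # No partition table detected: zero the first MiB (destructive!) so
            # stale metadata cannot interfere with the tests that follow.
            dd if=/dev/zero of="$dev" bs=1M count=1
        fi
    done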
00:04:24.852 1+0 records in 00:04:24.852 1+0 records out 00:04:24.852 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00436819 s, 240 MB/s 00:04:24.852 16:53:14 -- spdk/autotest.sh@118 -- # sync 00:04:24.852 16:53:15 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:24.852 16:53:15 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:24.852 16:53:15 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:26.852 16:53:16 -- spdk/autotest.sh@124 -- # uname -s 00:04:26.852 16:53:16 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:04:26.852 16:53:16 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:26.852 16:53:16 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:26.852 16:53:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:26.852 16:53:16 -- common/autotest_common.sh@10 -- # set +x 00:04:26.852 ************************************ 00:04:26.852 START TEST setup.sh 00:04:26.852 ************************************ 00:04:26.852 16:53:16 setup.sh -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:26.852 * Looking for test storage... 00:04:26.852 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:26.852 16:53:16 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:04:26.852 16:53:16 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:26.852 16:53:16 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:26.852 16:53:16 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:26.852 16:53:16 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:26.852 16:53:16 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:26.852 ************************************ 00:04:26.852 START TEST acl 00:04:26.852 ************************************ 00:04:26.852 16:53:16 setup.sh.acl -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:26.852 * Looking for test storage... 
00:04:26.852 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:26.852 16:53:16 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:04:26.852 16:53:16 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:26.852 16:53:16 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:26.853 16:53:16 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:26.853 16:53:16 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:26.853 16:53:16 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:26.853 16:53:16 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:26.853 16:53:16 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:26.853 16:53:16 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:26.853 16:53:16 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:26.853 16:53:16 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:04:26.853 16:53:16 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:04:26.853 16:53:16 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:26.853 16:53:16 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:26.853 16:53:16 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:26.853 16:53:16 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:04:26.853 16:53:16 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:04:26.853 16:53:16 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:26.853 16:53:16 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:26.853 16:53:16 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:26.853 16:53:16 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 00:04:26.853 16:53:16 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:04:26.853 16:53:16 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:26.853 16:53:16 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:26.853 16:53:16 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:04:26.853 16:53:16 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:04:26.853 16:53:16 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:04:26.853 16:53:16 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:04:26.853 16:53:16 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:04:26.853 16:53:16 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:26.853 16:53:16 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:27.421 16:53:17 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:04:27.421 16:53:17 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:04:27.421 16:53:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:27.421 16:53:17 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:04:27.421 16:53:17 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:04:27.421 16:53:17 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:27.988 16:53:18 setup.sh.acl -- 
setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:04:27.988 16:53:18 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:27.988 16:53:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:27.988 Hugepages 00:04:27.988 node hugesize free / total 00:04:28.248 16:53:18 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:28.248 16:53:18 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:28.248 16:53:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:28.248 00:04:28.248 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:28.248 16:53:18 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:28.248 16:53:18 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:28.248 16:53:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:28.248 16:53:18 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:04:28.248 16:53:18 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:04:28.248 16:53:18 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:28.248 16:53:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:28.248 16:53:18 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:04:28.248 16:53:18 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:28.248 16:53:18 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:04:28.248 16:53:18 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:28.248 16:53:18 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:28.248 16:53:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:28.248 16:53:18 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:04:28.248 16:53:18 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:28.248 16:53:18 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:28.248 16:53:18 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:28.248 16:53:18 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:28.248 16:53:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:28.248 16:53:18 setup.sh.acl -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:04:28.248 16:53:18 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:04:28.248 16:53:18 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:28.248 16:53:18 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:28.248 16:53:18 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:28.248 ************************************ 00:04:28.248 START TEST denied 00:04:28.248 ************************************ 00:04:28.248 16:53:18 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:04:28.248 16:53:18 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:04:28.248 16:53:18 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:04:28.248 16:53:18 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:04:28.248 16:53:18 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:04:28.507 16:53:18 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:29.442 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:04:29.442 16:53:19 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:04:29.442 16:53:19 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev 
driver 00:04:29.442 16:53:19 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:04:29.442 16:53:19 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:04:29.442 16:53:19 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:04:29.442 16:53:19 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:29.442 16:53:19 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:29.442 16:53:19 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:04:29.442 16:53:19 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:29.442 16:53:19 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:29.701 00:04:29.701 real 0m1.435s 00:04:29.701 user 0m0.575s 00:04:29.701 sys 0m0.796s 00:04:29.701 16:53:19 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:29.701 16:53:19 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:04:29.701 ************************************ 00:04:29.701 END TEST denied 00:04:29.701 ************************************ 00:04:29.960 16:53:20 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:04:29.960 16:53:20 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:29.960 16:53:20 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:29.960 16:53:20 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:29.960 16:53:20 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:29.960 ************************************ 00:04:29.960 START TEST allowed 00:04:29.960 ************************************ 00:04:29.960 16:53:20 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:04:29.960 16:53:20 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:04:29.960 16:53:20 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:04:29.960 16:53:20 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:04:29.960 16:53:20 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:04:29.960 16:53:20 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:30.528 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:30.528 16:53:20 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:00:11.0 00:04:30.528 16:53:20 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:04:30.528 16:53:20 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:04:30.528 16:53:20 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:04:30.528 16:53:20 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:04:30.528 16:53:20 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:30.528 16:53:20 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:30.528 16:53:20 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:04:30.528 16:53:20 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:30.528 16:53:20 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:31.466 00:04:31.466 real 0m1.507s 00:04:31.466 user 0m0.693s 00:04:31.466 sys 0m0.808s 00:04:31.466 16:53:21 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:04:31.466 16:53:21 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:04:31.466 ************************************ 00:04:31.466 END TEST allowed 00:04:31.466 ************************************ 00:04:31.466 16:53:21 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:04:31.466 00:04:31.466 real 0m4.709s 00:04:31.466 user 0m2.075s 00:04:31.466 sys 0m2.569s 00:04:31.466 16:53:21 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:31.466 16:53:21 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:31.466 ************************************ 00:04:31.466 END TEST acl 00:04:31.466 ************************************ 00:04:31.466 16:53:21 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:31.466 16:53:21 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:31.466 16:53:21 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:31.466 16:53:21 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:31.466 16:53:21 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:31.466 ************************************ 00:04:31.466 START TEST hugepages 00:04:31.466 ************************************ 00:04:31.466 16:53:21 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:31.466 * Looking for test storage... 00:04:31.466 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:31.466 16:53:21 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:31.466 16:53:21 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:31.466 16:53:21 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:31.466 16:53:21 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:31.466 16:53:21 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:31.466 16:53:21 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:31.466 16:53:21 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:31.466 16:53:21 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:04:31.466 16:53:21 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:04:31.466 16:53:21 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:04:31.466 16:53:21 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:31.466 16:53:21 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:31.466 16:53:21 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:31.466 16:53:21 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:04:31.466 16:53:21 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:31.466 16:53:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.466 16:53:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.467 16:53:21 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6029736 kB' 'MemAvailable: 7424604 kB' 'Buffers: 2436 kB' 'Cached: 1609148 kB' 'SwapCached: 0 kB' 'Active: 435492 kB' 'Inactive: 1280236 kB' 'Active(anon): 114632 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1280236 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 
kB' 'Dirty: 308 kB' 'Writeback: 0 kB' 'AnonPages: 105828 kB' 'Mapped: 48740 kB' 'Shmem: 10488 kB' 'KReclaimable: 61420 kB' 'Slab: 132624 kB' 'SReclaimable: 61420 kB' 'SUnreclaim: 71204 kB' 'KernelStack: 6252 kB' 'PageTables: 4256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412440 kB' 'Committed_AS: 335452 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:04:31.467 16:53:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.467 16:53:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:31.467 16:53:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.467 16:53:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.467 16:53:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.467 16:53:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:31.467 16:53:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.467 16:53:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.467 16:53:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.467 16:53:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:31.467 16:53:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.467 16:53:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.467 16:53:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.467 16:53:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:31.467 16:53:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.467 16:53:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.467 16:53:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.467 16:53:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:31.467 16:53:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.467 16:53:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.467 16:53:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.467 16:53:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:31.467 16:53:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.467 16:53:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.467 16:53:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.467 16:53:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:31.467 16:53:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.467 16:53:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.467 16:53:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.467 16:53:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:31.467 16:53:21 setup.sh.hugepages -- 
setup/common.sh@31 -- # IFS=': '
00:04:31.467 16:53:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:04:31.467 16:53:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:31.467 16:53:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue
[... the same IFS/read/test/continue trace repeats at setup/common.sh@31-32 for every remaining /proc/meminfo key, Inactive(anon) through HugePages_Surp, until the Hugepagesize line is reached ...]
00:04:31.469 16:53:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:31.469 16:53:21 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048
00:04:31.469 16:53:21 setup.sh.hugepages -- setup/common.sh@33 -- # return 0
00:04:31.469 16:53:21 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048
00:04:31.469 16:53:21 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:04:31.469 16:53:21 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:04:31.469 16:53:21 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC
00:04:31.469 16:53:21 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM
00:04:31.469 16:53:21 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE
00:04:31.469 16:53:21 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE
00:04:31.469 16:53:21 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes
00:04:31.469 16:53:21 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node
00:04:31.469 16:53:21 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:31.469 16:53:21 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048
00:04:31.469 16:53:21 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:31.469 16:53:21 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
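For reference, the field-by-field loop traced above is setup/common.sh's get_meminfo walking /proc/meminfo with IFS=': ' and read until the requested key matches, then echoing its value (2048 for Hugepagesize on this VM). A minimal stand-alone sketch of that pattern, under stated assumptions: the helper name is hypothetical and it reads the file directly instead of the mapfile buffer the real script uses.

    # get_meminfo_value KEY -- hypothetical, simplified re-creation of the scan
    # traced above; the real get_meminfo also supports per-NUMA-node meminfo.
    get_meminfo_value() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # skip non-matching keys, as in the trace
            echo "$val"                        # value only (kB for most keys)
            return 0
        done </proc/meminfo
        return 1
    }

    get_meminfo_value Hugepagesize   # prints 2048 here, matching the "echo 2048" above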
00:04:31.469 16:53:21 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp
00:04:31.469 16:53:21 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
00:04:31.469 16:53:21 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:04:31.469 16:53:21 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:31.469 16:53:21 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:04:31.469 16:53:21 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:31.469 16:53:21 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:04:31.469 16:53:21 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:04:31.469 16:53:21 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:04:31.469 16:53:21 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:04:31.469 16:53:21 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:31.469 16:53:21 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:31.469 16:53:21 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:31.469 ************************************
00:04:31.469 START TEST default_setup
00:04:31.469 ************************************
00:04:31.469 16:53:21 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup
00:04:31.469 16:53:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:04:31.469 16:53:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152
00:04:31.469 16:53:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:31.469 16:53:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift
00:04:31.469 16:53:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0')
00:04:31.469 16:53:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids
00:04:31.469 16:53:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:31.469 16:53:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:31.469 16:53:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:31.469 16:53:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:04:31.469 16:53:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes
00:04:31.469 16:53:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:31.469 16:53:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:31.469 16:53:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:31.469 16:53:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:31.469 16:53:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:31.469 16:53:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:31.469 16:53:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:04:31.469 16:53:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0
00:04:31.469 16:53:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output
00:04:31.469 16:53:21 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]]
00:04:31.469 16:53:21 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:32.409 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:32.409 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:04:32.409 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
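The sizing visible in the get_test_nr_hugepages trace above is simple division: the test asks for 2097152 kB of hugepage memory on node 0 (the later meminfo dumps show Hugetlb: 2097152 kB), and with the detected 2048 kB default page size that becomes nr_hugepages=1024, recorded in nodes_test[0]. A stand-alone sketch of that arithmetic, assuming illustrative variable names; the per-node sysfs path is the one clear_hp zeroes in the trace above.

    # Illustration only: mirrors the numbers in the trace, not the real helper.
    size_kb=2097152                                                    # requested hugepage memory, in kB (2 GiB)
    hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo) # 2048 on this VM
    nr_hugepages=$(( size_kb / hugepagesize_kb ))                      # 2097152 / 2048 = 1024
    echo "node0 nr_hugepages=$nr_hugepages"
    # Per-node count written by the setup scripts (and zeroed beforehand by clear_hp):
    #   /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages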
00:04:32.409 16:53:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:04:32.409 16:53:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node
00:04:32.409 16:53:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t
00:04:32.409 16:53:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s
00:04:32.409 16:53:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp
00:04:32.409 16:53:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv
00:04:32.409 16:53:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon
00:04:32.409 16:53:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:32.409 16:53:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:32.409 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:32.409 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:04:32.409 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:04:32.409 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:32.409 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:32.409 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:32.409 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:32.409 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:32.409 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:32.409 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:32.409 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8101776 kB' 'MemAvailable: 9496492 kB' 'Buffers: 2436 kB' 'Cached: 1609136 kB' 'SwapCached: 0 kB' 'Active: 452964 kB' 'Inactive: 1280236 kB' 'Active(anon): 132104 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1280236 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 123284 kB' 'Mapped: 48836 kB' 'Shmem: 10464 kB' 'KReclaimable: 61120 kB' 'Slab: 132276 kB' 'SReclaimable: 61120 kB' 'SUnreclaim: 71156 kB' 'KernelStack: 6256 kB' 'PageTables: 4336 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 352440 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB'
00:04:32.409 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:32.409 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:32.409 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
[... the read/continue trace repeats at setup/common.sh@31-32 for each /proc/meminfo key from MemFree through HardwareCorrupted ...]
00:04:32.411 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:32.411 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:32.411 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:32.411 16:53:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0
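The anon=0 above comes from the two checks verify_nr_hugepages starts with: transparent hugepages are in madvise mode on this VM (the string "always [madvise] never" does not contain "[never]"), so the test samples the AnonHugePages counter, which the dump reports as 0 kB. A rough stand-alone equivalent of those two checks, with illustrative variable names rather than the SPDK helpers themselves:

    # Illustration of the THP check and AnonHugePages sampling traced above.
    thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)    # "always [madvise] never" in this run
    if [[ $thp != *"[never]"* ]]; then
        # THP not fully disabled, so record THP-backed anonymous memory (kB)
        anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)   # 0 in the dump above
    else
        anon=0
    fi
    echo "anon=$anon"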
# mem=("${mem[@]#Node +([0-9]) }") 00:04:32.411 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.411 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8101892 kB' 'MemAvailable: 9496608 kB' 'Buffers: 2436 kB' 'Cached: 1609136 kB' 'SwapCached: 0 kB' 'Active: 452412 kB' 'Inactive: 1280236 kB' 'Active(anon): 131552 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1280236 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 122688 kB' 'Mapped: 48680 kB' 'Shmem: 10464 kB' 'KReclaimable: 61120 kB' 'Slab: 132276 kB' 'SReclaimable: 61120 kB' 'SUnreclaim: 71156 kB' 'KernelStack: 6224 kB' 'PageTables: 4228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 352440 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54580 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:04:32.411 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.411 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.411 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.411 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.411 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.411 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.411 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.411 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.411 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.411 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.411 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.411 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.411 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.411 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.411 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.411 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.411 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.411 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.411 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.411 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.411 16:53:22 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.411 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.411 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.411 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.411 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.411 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.411 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.411 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.411 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.411 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.411 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.411 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.411 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.411 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.411 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.411 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.411 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.411 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.411 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.411 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.411 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.411 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.411 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.411 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.411 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.411 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.411 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.411 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.411 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.411 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.412 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.412 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.412 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.412 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.412 16:53:22 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.412 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.412 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.412 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.412 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.412 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.412 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.412 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.412 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.412 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.412 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.412 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.412 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.412 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.412 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.412 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.412 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.412 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.412 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.412 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.412 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.412 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.412 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.412 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.412 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.412 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.412 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.412 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.412 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.412 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.412 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.412 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.412 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.412 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.412 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:04:32.412 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.412 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.412 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.412 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.412 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.412 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.412 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.412 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.412 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.412 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.412 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.412 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.412 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.412 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.412 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.412 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.412 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.412 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.412 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.412 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.412 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.412 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.412 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.412 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.412 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.412 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.412 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.412 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.412 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.412 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.412 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.412 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.412 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.412 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.412 16:53:22 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.412 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.412 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.412 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.412 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.412 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.412 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.412 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.412 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.412 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.412 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.412 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.412 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.412 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.412 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.412 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.412 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.412 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.412 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.412 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.413 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.413 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.413 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.413 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.413 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.413 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.413 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.413 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.413 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.413 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.413 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.413 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.413 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.413 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.413 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- 
# [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.413 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.413 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.413 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.413 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.413 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.413 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.413 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.413 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.413 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.413 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.413 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.413 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.413 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.413 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.413 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.413 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.413 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.413 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.413 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.413 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.413 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.413 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.413 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.413 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.413 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.413 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.413 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.413 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.413 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.413 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.413 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.413 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.413 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.413 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': 
' 00:04:32.413 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.413 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.413 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.413 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.413 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.413 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.413 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.413 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.413 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.413 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.413 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.413 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.413 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.413 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.413 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:32.413 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:32.413 16:53:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:04:32.413 16:53:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:32.413 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:32.413 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:32.413 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:32.413 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:32.413 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:32.413 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:32.413 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:32.413 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:32.413 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:32.413 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.413 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.413 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8101892 kB' 'MemAvailable: 9496620 kB' 'Buffers: 2436 kB' 'Cached: 1609136 kB' 'SwapCached: 0 kB' 'Active: 452244 kB' 'Inactive: 1280248 kB' 'Active(anon): 131384 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1280248 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 122516 kB' 'Mapped: 
48680 kB' 'Shmem: 10464 kB' 'KReclaimable: 61120 kB' 'Slab: 132276 kB' 'SReclaimable: 61120 kB' 'SUnreclaim: 71156 kB' 'KernelStack: 6224 kB' 'PageTables: 4228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 352440 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54580 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:04:32.413 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.413 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.413 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.414 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.414 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.414 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.414 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.414 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.414 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.414 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.414 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.414 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.414 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.414 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.414 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.414 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.414 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.414 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.414 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.414 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.414 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.414 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.414 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.414 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.414 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.414 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.414 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 
-- # IFS=': ' 00:04:32.414 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.414 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.414 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.414 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.414 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.414 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.414 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.414 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.414 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.414 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.414 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.414 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.414 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.414 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.414 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.414 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.414 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.414 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.414 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.414 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.414 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.414 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.414 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.414 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.414 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.414 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.414 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.414 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.414 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.414 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.414 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.414 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.414 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.414 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
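What the trace above is walking through is setup/common.sh's get_meminfo helper: it snapshots /proc/meminfo (or a per-node meminfo file under sysfs when a node id is passed), strips the "Node N" prefix those per-node files carry, and scans the fields with IFS=': ' read -r var val _, hitting "continue" for every field until the requested key (HugePages_Surp above, HugePages_Rsvd next) matches, at which point it echoes the value and returns. A minimal sketch of that logic, reconstructed from the trace rather than copied from the SPDK source, so names and details may differ:

  # Reconstruction of the get_meminfo helper being traced above
  # (reverse-engineered from the trace, not the verbatim setup/common.sh source):
  get_meminfo() {
      local get=$1 node=${2:-}
      local mem_f=/proc/meminfo line var val _
      # Per-node snapshots live under sysfs; fall back to the global file otherwise.
      [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
          mem_f=/sys/devices/system/node/node$node/meminfo
      while read -r line; do
          # Per-node files prefix every field with "Node <id> "; drop that prefix.
          if [[ $line == Node\ * ]]; then
              line=${line#Node }
              line=${line#* }
          fi
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] || continue   # skip every field that is not the one asked for
          echo "$val"                        # e.g. 0 for HugePages_Rsvd, 1024 for HugePages_Total
          return 0
      done < "$mem_f"
      return 1
  }

With the 1024 pre-allocated 2048 kB pages visible in the snapshot above, both get_meminfo HugePages_Surp and get_meminfo HugePages_Rsvd come back as 0, which setup/hugepages.sh records as surp=0 and resv=0 in the trace that follows.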
00:04:32.414 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.414 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.414 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.414 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.414 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.414 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.414 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.414 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.414 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.414 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.414 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.414 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.414 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.414 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.414 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.414 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.414 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.414 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.414 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.414 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.414 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.414 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.414 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.414 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.414 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.414 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.415 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.415 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.415 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.415 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.415 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.415 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.415 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.415 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.415 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r 
var val _ 00:04:32.415 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.415 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.415 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.415 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.415 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.415 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.415 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.415 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.415 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.415 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.415 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.415 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.415 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.415 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.415 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.415 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.415 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.415 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.415 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.415 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.415 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.415 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.415 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.415 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.415 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.415 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.415 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.415 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.415 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.415 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.415 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.415 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.415 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.415 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.415 
16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.415 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.415 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.415 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.415 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.415 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.415 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.415 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.415 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.415 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.415 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.415 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.415 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.415 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.415 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.415 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.415 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.415 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.415 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.415 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.415 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.415 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.415 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.415 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.415 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.415 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.415 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.415 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.415 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.415 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.415 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.415 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.415 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.415 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.415 16:53:22 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.415 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.416 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.416 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.416 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.416 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.416 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.416 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.416 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.416 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.416 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.416 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.416 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.416 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.416 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.416 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.416 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.416 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.416 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.416 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.416 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.416 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.416 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.416 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.416 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.416 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.416 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.416 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.416 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.416 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.416 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.416 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.416 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.416 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.416 16:53:22 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:32.416 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.416 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.416 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:32.416 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:32.416 16:53:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:04:32.677 nr_hugepages=1024 00:04:32.677 16:53:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:32.677 resv_hugepages=0 00:04:32.677 16:53:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:32.677 surplus_hugepages=0 00:04:32.677 16:53:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:32.677 anon_hugepages=0 00:04:32.677 16:53:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:32.677 16:53:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:32.677 16:53:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:32.677 16:53:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:32.677 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:32.677 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:32.677 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:32.677 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:32.677 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:32.677 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:32.677 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:32.677 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:32.677 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:32.677 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.677 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.677 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8101892 kB' 'MemAvailable: 9496620 kB' 'Buffers: 2436 kB' 'Cached: 1609136 kB' 'SwapCached: 0 kB' 'Active: 452540 kB' 'Inactive: 1280248 kB' 'Active(anon): 131680 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1280248 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 122816 kB' 'Mapped: 48680 kB' 'Shmem: 10464 kB' 'KReclaimable: 61120 kB' 'Slab: 132276 kB' 'SReclaimable: 61120 kB' 'SUnreclaim: 71156 kB' 'KernelStack: 6240 kB' 'PageTables: 4276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 352440 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54580 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 
'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:04:32.677 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.677 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.677 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.677 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.677 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.678 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.678 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.678 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.678 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.678 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.678 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.678 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.678 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.678 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.678 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.678 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.678 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.678 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.678 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.678 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.678 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.678 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.678 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.678 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.678 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.678 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.678 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.678 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.678 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.678 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.678 16:53:22 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:32.678 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.678 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.678 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.678 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.678 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.678 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.678 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.678 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.678 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.678 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.678 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.678 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.678 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.678 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.678 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.678 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.678 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.678 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.678 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.678 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.678 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.678 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.678 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.678 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.678 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.678 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.678 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.678 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.678 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.678 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.678 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.678 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.678 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.678 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.678 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.678 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.678 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.678 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.678 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.678 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.678 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.678 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.678 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.678 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.678 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.678 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.678 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.678 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.678 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.678 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.678 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.678 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.678 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.678 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.678 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.678 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.678 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.678 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.678 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.678 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.678 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.678 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.678 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.678 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.678 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.678 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.678 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.678 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.678 16:53:22 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.678 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.678 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.678 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.679 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.679 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.679 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.679 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.679 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.679 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.679 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.679 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.679 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.679 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.679 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.679 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.679 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.679 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.679 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.679 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.679 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.679 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.679 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.679 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.679 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.679 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.679 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.679 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.679 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.679 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.679 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.679 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.679 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.679 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.679 
16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.679 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.679 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.679 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.679 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.679 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.679 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.679 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.679 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.679 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.679 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.679 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.679 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.679 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.679 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.679 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.679 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.679 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.679 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.679 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.679 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.679 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.679 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.679 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.679 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.679 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.679 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.679 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.679 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.679 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.679 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.679 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.679 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.679 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.679 16:53:22 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.679 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.679 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.679 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.679 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.679 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.679 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.679 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.679 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.679 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.679 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.679 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.679 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.679 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.679 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.679 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.679 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.679 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.679 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.679 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.679 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.679 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.679 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.679 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.679 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.679 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.679 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:04:32.679 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:32.679 16:53:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:32.679 16:53:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:04:32.679 16:53:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:04:32.679 16:53:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:32.679 16:53:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:32.679 16:53:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # 
no_nodes=1 00:04:32.679 16:53:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:32.679 16:53:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:32.679 16:53:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:32.679 16:53:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:32.679 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:32.680 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:04:32.680 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:32.680 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:32.680 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:32.680 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:32.680 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:32.680 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:32.680 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:32.680 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.680 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8101892 kB' 'MemUsed: 4140084 kB' 'SwapCached: 0 kB' 'Active: 452432 kB' 'Inactive: 1280248 kB' 'Active(anon): 131572 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1280248 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'FilePages: 1611572 kB' 'Mapped: 48680 kB' 'AnonPages: 122756 kB' 'Shmem: 10464 kB' 'KernelStack: 6240 kB' 'PageTables: 4280 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61120 kB' 'Slab: 132272 kB' 'SReclaimable: 61120 kB' 'SUnreclaim: 71152 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:32.680 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.680 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.680 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.680 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.680 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.680 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.680 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.680 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.680 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.680 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.680 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue 00:04:32.680 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.680 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.680 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.680 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.680 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.680 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.680 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.680 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.680 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.680 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.680 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.680 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.680 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.680 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.680 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.680 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.680 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.680 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.680 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.680 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.680 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.680 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.680 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.680 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.680 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.680 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.680 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.680 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.680 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.680 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.680 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.680 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.680 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.680 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.680 16:53:22 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.680 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.680 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.680 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.680 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.680 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.680 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.680 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.680 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.680 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.680 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.680 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.680 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.680 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.680 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.680 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.680 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.680 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.680 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.680 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.680 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.680 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.680 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.680 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.680 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.680 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.680 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.680 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.680 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.680 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.680 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.680 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.681 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.681 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.681 16:53:22 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:32.681 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.681 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.681 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.681 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.681 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.681 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.681 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.681 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.681 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.681 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.681 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.681 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.681 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.681 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.681 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.681 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.681 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.681 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.681 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.681 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.681 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.681 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.681 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.681 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.681 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.681 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.681 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.681 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.681 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.681 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.681 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.681 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.681 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.681 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.681 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.681 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.681 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.681 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.681 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.681 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.681 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.681 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.681 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.681 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.681 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.681 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.681 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.681 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.681 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.681 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.681 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.681 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.681 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.681 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.681 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.681 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.681 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.681 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.681 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.681 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.681 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.681 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.681 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:32.681 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:32.681 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:32.681 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.681 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:32.681 16:53:22 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 
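The xtrace leading up to this point is setup/common.sh's get_meminfo helper walking /proc/meminfo key by key until it reaches HugePages_Surp, then echoing the value (0) and returning; that 0 is what hugepages.sh adds to nodes_test[node] a few lines further down. A minimal sketch of that lookup pattern in plain bash, simplifying the harness's printf/read plumbing (illustrative only, not the exact SPDK source):

#!/usr/bin/env bash
# Sketch of the meminfo lookup the trace shows; simplified from the
# get_meminfo helper in setup/common.sh.
shopt -s extglob   # needed for the "Node N " prefix strip below

get_meminfo() {
    local get=$1 node=${2:-}        # key to look up, optional NUMA node
    local mem_f=/proc/meminfo
    local -a mem

    # Per-node lookups read the node-local meminfo. The harness tests the raw
    # path, which is why the log shows ".../node/node/meminfo" when no node is
    # passed; the -n guard here is just a small clarification.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # per-node lines start with "Node N "

    # Same loop shape as the trace: split each line on ': ', skip every key
    # that is not the requested one, print the matching value and stop.
    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done
    return 1
}

# Usage, mirroring the lookups in this part of the log:
get_meminfo HugePages_Surp       # surplus hugepages, system-wide (the scan above)
get_meminfo HugePages_Total 0    # per-node variant: total hugepages on node 0

The linear scan with continue is what produces the long runs of "[[ Key == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]" lines in this log: every meminfo field is compared against the requested key until the match is found.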
00:04:32.681 16:53:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:32.681 16:53:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:32.681 16:53:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:32.681 16:53:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:32.681 node0=1024 expecting 1024 00:04:32.681 16:53:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:32.681 16:53:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:32.681 00:04:32.681 real 0m1.022s 00:04:32.681 user 0m0.477s 00:04:32.681 sys 0m0.455s 00:04:32.681 16:53:22 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:32.681 16:53:22 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:04:32.681 ************************************ 00:04:32.681 END TEST default_setup 00:04:32.681 ************************************ 00:04:32.681 16:53:22 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:32.681 16:53:22 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:32.681 16:53:22 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:32.681 16:53:22 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:32.681 16:53:22 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:32.681 ************************************ 00:04:32.681 START TEST per_node_1G_alloc 00:04:32.681 ************************************ 00:04:32.681 16:53:22 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:04:32.681 16:53:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:04:32.681 16:53:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:04:32.681 16:53:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:32.681 16:53:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:32.681 16:53:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:04:32.681 16:53:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:32.681 16:53:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:32.681 16:53:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:32.681 16:53:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:32.681 16:53:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:32.681 16:53:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:32.682 16:53:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:32.682 16:53:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:32.682 16:53:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:32.682 16:53:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:32.682 16:53:22 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:32.682 16:53:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:32.682 16:53:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:32.682 16:53:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:32.682 16:53:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:32.682 16:53:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:32.682 16:53:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0 00:04:32.682 16:53:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:04:32.682 16:53:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:32.682 16:53:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:32.943 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:32.943 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:32.943 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:32.943 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:04:32.943 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:32.943 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:32.943 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:32.943 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:32.943 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:32.943 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:32.943 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:32.943 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:32.943 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:32.943 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:32.943 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:32.943 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:32.943 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:32.943 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:32.943 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:32.943 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:32.943 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:32.943 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:32.943 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.943 16:53:23 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:04:32.944 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9153660 kB' 'MemAvailable: 10548388 kB' 'Buffers: 2436 kB' 'Cached: 1609136 kB' 'SwapCached: 0 kB' 'Active: 452928 kB' 'Inactive: 1280248 kB' 'Active(anon): 132068 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1280248 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 123140 kB' 'Mapped: 48772 kB' 'Shmem: 10464 kB' 'KReclaimable: 61120 kB' 'Slab: 132320 kB' 'SReclaimable: 61120 kB' 'SUnreclaim: 71200 kB' 'KernelStack: 6212 kB' 'PageTables: 4256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 352440 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:04:32.944 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.944 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:32.944 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.944 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.944 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.944 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:32.944 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.944 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.944 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.944 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:32.944 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.944 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.944 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.944 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:32.944 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.944 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.944 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.944 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:32.944 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.944 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.944 16:53:23 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.944 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:32.944 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.944 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.944 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.944 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:32.944 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.944 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.944 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.944 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:32.944 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.944 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.944 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.944 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:32.944 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.944 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.944 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.944 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:32.944 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.944 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.944 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.944 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:32.944 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.944 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.944 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.944 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:32.944 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.944 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.944 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.944 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:32.944 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.944 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.944 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.944 16:53:23 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:32.944 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.944 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.944 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.944 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:32.944 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.944 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.944 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.944 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:32.944 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.944 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.944 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.944 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:32.944 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.944 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.944 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.944 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:32.944 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.944 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.944 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.944 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:32.944 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.944 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.944 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.944 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:32.944 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.944 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.944 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.944 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:32.944 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.944 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.944 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.944 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:32.944 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:32.944 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.944 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.944 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:32.944 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.944 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.944 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.944 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:32.944 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.944 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.944 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.944 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:32.944 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.944 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.944 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.944 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:32.944 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.944 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.944 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.944 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:32.944 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.944 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.944 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.944 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:32.944 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.944 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.944 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.944 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:32.944 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.944 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.944 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.944 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:32.944 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.945 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.945 16:53:23 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.945 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:32.945 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.945 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.945 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.945 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:32.945 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.945 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.945 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.945 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:32.945 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.945 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.945 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.945 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:32.945 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.945 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.945 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.945 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:32.945 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.945 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.945 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.945 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:32.945 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.945 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.945 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.945 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:32.945 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.945 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.945 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.945 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:32.945 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.945 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.945 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.945 16:53:23 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:32.945 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.945 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.945 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.945 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:32.945 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.945 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.945 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.945 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:32.945 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:32.945 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:32.945 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:32.945 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:32.945 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:32.945 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:32.945 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:32.945 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:32.945 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:32.945 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:32.945 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:32.945 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:33.210 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.210 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.210 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9154704 kB' 'MemAvailable: 10549432 kB' 'Buffers: 2436 kB' 'Cached: 1609136 kB' 'SwapCached: 0 kB' 'Active: 452532 kB' 'Inactive: 1280248 kB' 'Active(anon): 131672 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1280248 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 122784 kB' 'Mapped: 48680 kB' 'Shmem: 10464 kB' 'KReclaimable: 61120 kB' 'Slab: 132324 kB' 'SReclaimable: 61120 kB' 'SUnreclaim: 71204 kB' 'KernelStack: 6240 kB' 'PageTables: 4280 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 352440 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:04:33.210 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.210 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.210 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.210 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.210 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.210 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.210 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.210 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.210 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.210 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.210 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.210 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.210 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.210 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.210 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.210 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.210 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.210 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.210 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.210 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.210 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.210 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.210 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.210 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.210 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.210 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.210 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.210 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.210 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.210 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.210 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.210 16:53:23 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.210 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.210 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.210 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.210 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.210 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.210 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.210 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.210 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.210 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.210 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.210 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.211 16:53:23 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.211 16:53:23 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.211 16:53:23 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.211 16:53:23 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.211 16:53:23 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.211 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9154704 kB' 'MemAvailable: 10549432 kB' 'Buffers: 2436 kB' 'Cached: 1609136 kB' 'SwapCached: 0 kB' 'Active: 452500 kB' 'Inactive: 1280248 kB' 'Active(anon): 131640 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1280248 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 122780 kB' 'Mapped: 48680 kB' 'Shmem: 10464 kB' 'KReclaimable: 61120 kB' 'Slab: 132324 kB' 'SReclaimable: 61120 kB' 'SUnreclaim: 71204 kB' 'KernelStack: 6240 kB' 'PageTables: 4280 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 352440 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 
-- # continue 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.212 16:53:23 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.212 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.213 
16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.213 16:53:23 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:33.213 nr_hugepages=512 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:33.213 resv_hugepages=0 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:33.213 surplus_hugepages=0 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:33.213 anon_hugepages=0 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9154704 kB' 'MemAvailable: 10549432 kB' 'Buffers: 2436 kB' 'Cached: 1609136 kB' 'SwapCached: 0 kB' 'Active: 452508 kB' 'Inactive: 1280248 kB' 'Active(anon): 131648 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1280248 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 122792 kB' 'Mapped: 48680 kB' 'Shmem: 10464 kB' 'KReclaimable: 61120 
kB' 'Slab: 132324 kB' 'SReclaimable: 61120 kB' 'SUnreclaim: 71204 kB' 'KernelStack: 6240 kB' 'PageTables: 4280 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 352440 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.213 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.214 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.214 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.214 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.214 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.214 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.214 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.214 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.214 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.214 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.214 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.214 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.214 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.214 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.214 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.214 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.214 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:04:33.214 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.214 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.214 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.214 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.214 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.214 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.214 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.214 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.214 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.214 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.214 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.214 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.214 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.214 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.214 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.214 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.214 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.214 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.214 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.214 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.214 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.214 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.214 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.214 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.214 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.214 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.214 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.214 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.214 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.214 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.214 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.214 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.214 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.214 16:53:23 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.214 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.214 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.214 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.214 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.214 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.214 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.214 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.214 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.214 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.214 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.214 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.214 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.214 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.214 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.214 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.214 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.214 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.214 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.214 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.214 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.214 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.214 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.214 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.214 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.214 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.214 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.214 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.214 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.214 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.214 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.214 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.214 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.214 
16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.214 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.214 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.214 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.214 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.214 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.214 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.214 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # continue 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9155052 kB' 'MemUsed: 3086924 kB' 'SwapCached: 0 kB' 'Active: 452432 kB' 'Inactive: 1280248 kB' 'Active(anon): 131572 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1280248 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'FilePages: 1611572 kB' 'Mapped: 48680 kB' 'AnonPages: 122672 kB' 'Shmem: 10464 kB' 'KernelStack: 6208 kB' 'PageTables: 4180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 
'KReclaimable: 61120 kB' 'Slab: 132324 kB' 'SReclaimable: 61120 kB' 'SUnreclaim: 71204 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.215 16:53:23 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.215 16:53:23 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.215 16:53:23 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:33.215 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:33.216 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:33.216 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:33.216 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:33.216 node0=512 expecting 512 00:04:33.216 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:33.216 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:33.216 00:04:33.216 real 0m0.544s 00:04:33.216 user 0m0.275s 00:04:33.216 sys 0m0.300s 00:04:33.216 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:33.216 16:53:23 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:33.216 ************************************ 00:04:33.216 END TEST per_node_1G_alloc 00:04:33.216 ************************************ 00:04:33.216 16:53:23 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:33.216 16:53:23 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:33.216 16:53:23 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:33.216 16:53:23 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:33.216 16:53:23 
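[editorial sketch] The long runs of "[[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]" / "continue" pairs traced above (and again below for the next test) come from setup/common.sh's get_meminfo walking /proc/meminfo one key at a time until it reaches the requested counter. A minimal standalone sketch of that scan, assuming plain /proc/meminfo input and an illustrative helper name (not the actual setup/common.sh implementation):

    #!/usr/bin/env bash
    # Illustrative sketch of the per-key /proc/meminfo scan seen in the trace;
    # the helper name is hypothetical, only the parsing pattern mirrors the log.
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # every non-matching key is skipped
            echo "${val:-0}"                   # value in kB, or a bare page count
            return 0
        done < /proc/meminfo
        echo 0                                 # key not present
    }

    get_meminfo_sketch HugePages_Surp          # e.g. prints 0 on the VM traced above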
setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:33.216 ************************************ 00:04:33.216 START TEST even_2G_alloc 00:04:33.216 ************************************ 00:04:33.216 16:53:23 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:04:33.216 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:33.216 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:33.216 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:33.216 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:33.216 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:33.216 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:33.216 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:33.216 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:33.216 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:33.216 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:33.216 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:33.216 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:33.216 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:33.216 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:33.216 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:33.216 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:04:33.216 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:33.216 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:33.216 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:33.216 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:33.216 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:33.216 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:04:33.216 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:33.216 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:33.473 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:33.473 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:33.473 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:33.736 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:33.737 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:33.737 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:33.737 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:33.737 16:53:23 setup.sh.hugepages.even_2G_alloc 
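[editorial sketch] At the top of this even_2G_alloc run the test asks for 2097152 kB of hugepages, which at the 2048 kB Hugepagesize reported below works out to nr_hugepages=1024, and it requests an even per-node spread by exporting HUGE_EVEN_ALLOC=yes before calling scripts/setup.sh. A hedged sketch of the equivalent standalone invocation, using the same variables and script path that appear in the trace (running it outside the test harness is an assumption, not something this log shows):

    # Reproduce the allocation step traced above, outside the test harness.
    # 1024 pages * 2048 kB/page = 2097152 kB (2 GiB), spread evenly across NUMA nodes.
    NRHUGE=1024 HUGE_EVEN_ALLOC=yes /home/vagrant/spdk_repo/spdk/scripts/setup.sh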
-- setup/hugepages.sh@92 -- # local surp 00:04:33.737 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:33.737 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:33.737 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:33.737 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:33.737 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:33.737 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:33.737 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:33.737 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:33.737 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:33.737 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:33.737 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:33.737 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:33.737 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:33.737 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.737 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.737 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8101752 kB' 'MemAvailable: 9496480 kB' 'Buffers: 2436 kB' 'Cached: 1609136 kB' 'SwapCached: 0 kB' 'Active: 452544 kB' 'Inactive: 1280248 kB' 'Active(anon): 131684 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1280248 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 122792 kB' 'Mapped: 48812 kB' 'Shmem: 10464 kB' 'KReclaimable: 61120 kB' 'Slab: 132380 kB' 'SReclaimable: 61120 kB' 'SUnreclaim: 71260 kB' 'KernelStack: 6196 kB' 'PageTables: 4232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 352440 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:04:33.737 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.737 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.737 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.737 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.737 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.737 16:53:23 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:04:33.737 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.737 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.737 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.737 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.737 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.737 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.737 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.737 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.737 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.737 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.737 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.737 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.737 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.737 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.737 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.737 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.737 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.737 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.737 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.737 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.737 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.737 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.737 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.737 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.737 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.737 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.737 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.737 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.737 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.737 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.737 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.737 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.737 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.737 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.737 16:53:23 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.737 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.737 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.737 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.737 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.737 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.737 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.737 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.737 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.737 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.737 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.737 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.737 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.737 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.737 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.737 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.737 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.737 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.737 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.737 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.737 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.737 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.737 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.737 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.737 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.737 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.737 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.737 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.737 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.737 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.737 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.737 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.737 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.737 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.737 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:04:33.737 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.737 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.737 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.737 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.737 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.737 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.737 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.737 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.737 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.737 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.737 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.737 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.737 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.737 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.737 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.737 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.737 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.737 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.737 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.737 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.737 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.738 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.738 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.738 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.738 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.738 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.738 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.738 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.738 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.738 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.738 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.738 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.738 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.738 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.738 16:53:23 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.738 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.738 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.738 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.738 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.738 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.738 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.738 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.738 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.738 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.738 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.738 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.738 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.738 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.738 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.738 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.738 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.738 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.738 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.738 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.738 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.738 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.738 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.738 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.738 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.738 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.738 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.738 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.738 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.738 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.738 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.738 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.738 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.738 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.738 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:33.738 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.738 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.738 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.738 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.738 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.738 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.738 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.738 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.738 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.738 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.738 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.738 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.738 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.738 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.738 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.738 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.738 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.738 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:33.738 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:33.738 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:33.738 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:33.738 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:33.738 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:33.738 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:33.738 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:33.738 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:33.738 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:33.738 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:33.738 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:33.738 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:33.738 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.738 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.738 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8101752 kB' 'MemAvailable: 9496480 kB' 'Buffers: 2436 kB' 'Cached: 1609136 kB' 'SwapCached: 0 kB' 'Active: 452212 kB' 'Inactive: 
1280248 kB' 'Active(anon): 131352 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1280248 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 122772 kB' 'Mapped: 48684 kB' 'Shmem: 10464 kB' 'KReclaimable: 61120 kB' 'Slab: 132376 kB' 'SReclaimable: 61120 kB' 'SUnreclaim: 71256 kB' 'KernelStack: 6240 kB' 'PageTables: 4280 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 352440 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:04:33.738 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.738 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.738 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.738 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.738 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.738 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.738 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.738 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.738 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.738 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.738 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.738 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.738 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.738 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.738 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.738 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.738 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.738 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.738 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.738 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.738 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.738 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.738 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.738 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:33.738 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.738 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.738 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.738 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.738 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.738 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.738 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.738 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.738 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.738 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.738 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.738 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.738 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.739 16:53:23 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.739 16:53:23 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.739 16:53:23 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.739 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.740 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.740 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.740 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.740 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.740 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.740 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.740 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.740 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.740 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.740 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.740 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.740 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.740 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.740 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.740 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.740 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.740 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.740 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.740 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.740 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.740 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.740 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.740 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.740 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.740 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.740 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.740 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:33.740 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.740 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.740 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.740 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.740 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.740 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.740 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.740 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.740 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.740 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:33.740 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:33.740 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:33.740 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:33.740 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:33.740 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:33.740 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:33.740 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:33.740 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:33.740 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:33.740 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:33.740 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:33.740 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:33.740 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.740 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.740 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8101752 kB' 'MemAvailable: 9496480 kB' 'Buffers: 2436 kB' 'Cached: 1609136 kB' 'SwapCached: 0 kB' 'Active: 452192 kB' 'Inactive: 1280248 kB' 'Active(anon): 131332 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1280248 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 122732 kB' 'Mapped: 48684 kB' 'Shmem: 10464 kB' 'KReclaimable: 61120 kB' 'Slab: 132376 kB' 'SReclaimable: 61120 kB' 'SUnreclaim: 71256 kB' 'KernelStack: 6224 kB' 'PageTables: 4232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 352440 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 
'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:04:33.740 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.740 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.740 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.740 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.740 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.740 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.740 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.740 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.740 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.740 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.740 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.740 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.740 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.740 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.740 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.740 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.740 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.740 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.740 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.740 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.740 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.740 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.740 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.740 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.740 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.740 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.740 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.740 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.740 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.740 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.740 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.740 16:53:23 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:33.740 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.740 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.740 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.740 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.740 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.740 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.740 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.740 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.740 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.740 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.740 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.740 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.740 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.740 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.740 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.740 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.740 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.740 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.740 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.740 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.740 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.740 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.740 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.740 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.740 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.740 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.740 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.740 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.740 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.740 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.740 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.740 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.740 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.740 16:53:23 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:04:33.740 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.741 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.741 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.741 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.741 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.741 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.741 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.741 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.741 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.741 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.741 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.741 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.741 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.741 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.741 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.741 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.741 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.741 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.741 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.741 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.741 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.741 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.741 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.741 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.741 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.741 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.741 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.741 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.741 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.741 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.741 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.741 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.741 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.741 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.741 16:53:23 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.741 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.741 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.741 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.741 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.741 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.741 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.741 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.741 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.741 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.741 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.741 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.741 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.741 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.741 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.741 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.741 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.741 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.741 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.741 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.741 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.741 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.741 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.741 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.741 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.741 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.741 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.741 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.741 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.741 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.741 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.741 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.741 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.741 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.741 16:53:23 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.741 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.741 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.741 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.741 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.741 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.741 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.741 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.741 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.741 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.741 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.741 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.741 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.741 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.741 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.741 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.741 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.741 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.741 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.741 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.741 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.741 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.741 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.741 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.741 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.741 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.741 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.741 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.741 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.741 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.741 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.741 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.741 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.741 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.741 16:53:23 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.741 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.741 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.741 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.741 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.741 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.741 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.741 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.741 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.742 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.742 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.742 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.742 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.742 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.742 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.742 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.742 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.742 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.742 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.742 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.742 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.742 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.742 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.742 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.742 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.742 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.742 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.742 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.742 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.742 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.742 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.742 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.742 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.742 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:33.742 16:53:23 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@33 -- # return 0 00:04:33.742 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:33.742 nr_hugepages=1024 00:04:33.742 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:33.742 resv_hugepages=0 00:04:33.742 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:33.742 surplus_hugepages=0 00:04:33.742 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:33.742 anon_hugepages=0 00:04:33.742 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:33.742 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:33.742 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:33.742 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:33.742 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:33.742 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:33.742 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:33.742 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:33.742 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:33.742 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:33.742 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:33.742 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:33.742 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:33.742 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.742 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.742 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8101752 kB' 'MemAvailable: 9496480 kB' 'Buffers: 2436 kB' 'Cached: 1609136 kB' 'SwapCached: 0 kB' 'Active: 452240 kB' 'Inactive: 1280248 kB' 'Active(anon): 131380 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1280248 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 122776 kB' 'Mapped: 48684 kB' 'Shmem: 10464 kB' 'KReclaimable: 61120 kB' 'Slab: 132376 kB' 'SReclaimable: 61120 kB' 'SUnreclaim: 71256 kB' 'KernelStack: 6240 kB' 'PageTables: 4280 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 352440 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:04:33.742 16:53:23 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.742 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.742 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.742 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.742 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.742 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.742 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.742 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.742 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.742 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.742 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.742 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.742 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.742 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.742 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.742 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.742 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.742 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.742 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.742 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.742 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.742 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.742 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.742 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.742 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.742 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.742 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.742 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.742 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.742 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.742 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.742 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.742 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.742 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.742 16:53:23 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.742 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.742 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.742 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.742 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.742 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.742 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.742 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.742 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.742 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.742 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.742 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.742 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.742 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.742 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.742 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.742 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.742 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.742 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.742 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.742 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.742 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.742 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.742 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.742 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.742 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.742 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.742 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.742 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.742 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.742 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.742 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.742 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.742 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.742 16:53:23 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.742 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:33.743 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:33.744 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:33.744 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:33.744 16:53:23 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:33.744 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:33.744 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:04:33.744 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:33.744 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:33.744 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:33.744 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:33.744 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:33.744 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:33.744 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:33.744 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.744 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.744 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8101752 kB' 'MemUsed: 4140224 kB' 'SwapCached: 0 kB' 'Active: 452448 kB' 'Inactive: 1280248 kB' 'Active(anon): 131588 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1280248 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'FilePages: 1611572 kB' 'Mapped: 48684 kB' 'AnonPages: 122700 kB' 'Shmem: 10464 kB' 'KernelStack: 6224 kB' 'PageTables: 4228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61120 kB' 'Slab: 132376 kB' 'SReclaimable: 61120 kB' 'SUnreclaim: 71256 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:33.744 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.744 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.744 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.744 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.744 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.744 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.744 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.744 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.744 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.744 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.744 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.744 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.744 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.744 16:53:23 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.744 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.744 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.744 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.744 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.744 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.744 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.744 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.744 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.744 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.744 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.744 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.744 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.744 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.744 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.744 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.744 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.744 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.744 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.744 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.744 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.744 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.744 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.744 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.744 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.744 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.744 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.744 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.744 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.744 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.744 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.744 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.744 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.744 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.744 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:33.744 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.744 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.744 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.744 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.744 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.744 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.744 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.744 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.744 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.744 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.744 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.744 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.744 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.744 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.744 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.744 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.744 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.744 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.744 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.744 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.744 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.744 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.744 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.744 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.744 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.744 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.744 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.744 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.744 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.744 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.744 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.744 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.744 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.744 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.744 16:53:23 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.744 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.744 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.744 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.744 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.744 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.744 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.744 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.744 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.744 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.744 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.744 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.744 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.744 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.744 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.744 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.744 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.744 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.744 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.744 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.744 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.744 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.744 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.744 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.744 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.744 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.745 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.745 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.745 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.745 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.745 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.745 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.745 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.745 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.745 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- 
# [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.745 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.745 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.745 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.745 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.745 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.745 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.745 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.745 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.745 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.745 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.745 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.745 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.745 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.745 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.745 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.745 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.745 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.745 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.745 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.745 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.745 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.745 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.745 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.745 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.745 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.745 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.745 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.745 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.745 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:33.745 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:33.745 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:33.745 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:33.745 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:33.745 16:53:23 setup.sh.hugepages.even_2G_alloc -- 
setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:33.745 node0=1024 expecting 1024 00:04:33.745 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:33.745 16:53:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:33.745 00:04:33.745 real 0m0.545s 00:04:33.745 user 0m0.272s 00:04:33.745 sys 0m0.283s 00:04:33.745 ************************************ 00:04:33.745 END TEST even_2G_alloc 00:04:33.745 ************************************ 00:04:33.745 16:53:23 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:33.745 16:53:23 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:33.745 16:53:23 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:33.745 16:53:23 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:33.745 16:53:23 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:33.745 16:53:23 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:33.745 16:53:23 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:33.745 ************************************ 00:04:33.745 START TEST odd_alloc 00:04:33.745 ************************************ 00:04:33.745 16:53:23 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:04:33.745 16:53:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:33.745 16:53:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:04:33.745 16:53:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:33.745 16:53:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:33.745 16:53:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:33.745 16:53:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:33.745 16:53:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:33.745 16:53:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:33.745 16:53:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:33.745 16:53:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:33.745 16:53:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:33.745 16:53:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:33.745 16:53:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:33.745 16:53:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:33.745 16:53:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:33.745 16:53:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:04:33.745 16:53:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:33.745 16:53:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:33.745 16:53:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:33.745 16:53:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:33.745 16:53:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 
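[Editor's sketch, for readability of the trace above. The long per-key scans are setup/common.sh's get_meminfo reading /proc/meminfo (or /sys/devices/system/node/nodeN/meminfo for a single node) with `read -r var val _` and `continue`-ing until the requested key matches, then echoing the value back to hugepages.sh. In this run that yielded HugePages_Rsvd=0, HugePages_Total=1024 and node0 HugePages_Surp=0, and even_2G_alloc passed because 1024 == nr_hugepages + surplus + reserved and node0=1024 as expected. The helper below is a hypothetical stand-in, not SPDK's actual get_meminfo; it only mirrors the lookup and the accounting check the trace shows.]

#!/usr/bin/env bash
# Hypothetical helper (NOT SPDK's setup/common.sh:get_meminfo): pick one key out of
# /proc/meminfo, or out of /sys/devices/system/node/node<N>/meminfo when a node
# number is supplied.
get_meminfo_value() {
    local key=$1 node=${2:-}
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    # Per-node files prefix every line with "Node <N> "; strip it, then print the
    # value that follows "<key>:".
    awk -v key="$key" '{ sub(/^Node [0-9]+ /, "") } $1 == (key ":") { print $2; exit }' "$mem_f"
}

# The identity even_2G_alloc just verified, with the values this run produced
# (nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0):
total=$(get_meminfo_value HugePages_Total)
resv=$(get_meminfo_value HugePages_Rsvd)
surp=$(get_meminfo_value HugePages_Surp)
node0_total=$(get_meminfo_value HugePages_Total 0)
(( total == 1024 + resv + surp )) && echo "global hugepage accounting matches"
[[ $node0_total == 1024 ]] && echo "node0=1024 as expected"

[For the odd_alloc case that starts here, the logged numbers are self-consistent: HUGEMEM=2049 corresponds to a 2098176 kB request (2049 x 1024), which at the 2048 kB Hugepagesize lands on 1025 pages; the dump that follows shows HugePages_Total: 1025 and Hugetlb: 2099200 kB (1025 x 2048). The round-up step below is an assumption consistent with those values; the log only shows the 2098176 kB input and the 1025-page result.]

#!/usr/bin/env bash
# Reproduce the odd_alloc page-count arithmetic from the logged values only.
hugemem_mb=2049
hugepagesize_kb=2048
size_kb=$((hugemem_mb * 1024))                                   # 2098176 kB requested
pages=$(( (size_kb + hugepagesize_kb - 1) / hugepagesize_kb ))   # 1025 (assumed round-up)
echo "pages=$pages hugetlb_kb=$((pages * hugepagesize_kb))"      # 1025, 2099200 kB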
00:04:33.745 16:53:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:04:33.745 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:33.745 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:34.315 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:34.315 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:34.315 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:34.315 16:53:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:34.315 16:53:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:04:34.315 16:53:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:34.315 16:53:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:34.315 16:53:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:34.315 16:53:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:34.315 16:53:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:34.315 16:53:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:34.315 16:53:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:34.315 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:34.315 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:34.315 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:34.315 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:34.315 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:34.315 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:34.315 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:34.315 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:34.315 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:34.315 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8101896 kB' 'MemAvailable: 9496628 kB' 'Buffers: 2436 kB' 'Cached: 1609140 kB' 'SwapCached: 0 kB' 'Active: 452756 kB' 'Inactive: 1280252 kB' 'Active(anon): 131896 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1280252 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 123016 kB' 'Mapped: 48868 kB' 'Shmem: 10464 kB' 'KReclaimable: 61120 kB' 'Slab: 132384 kB' 'SReclaimable: 61120 kB' 'SUnreclaim: 71264 kB' 'KernelStack: 6260 kB' 'PageTables: 4400 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 352440 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 
'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.316 16:53:24 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.316 
16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.316 16:53:24 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.316 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.317 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.317 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.317 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.317 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.317 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.317 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.317 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.317 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.317 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.317 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.317 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.317 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.317 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.317 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.317 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.317 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.317 
16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.317 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.317 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.317 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.317 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.317 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.317 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.317 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.317 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.317 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.317 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.317 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.317 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.317 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.317 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.317 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.317 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.317 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.317 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.317 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.317 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.317 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:34.317 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:34.317 16:53:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:34.317 16:53:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:34.317 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:34.317 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:34.317 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:34.317 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:34.317 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:34.317 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:34.317 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:34.317 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:34.317 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:34.317 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.317 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 
12241976 kB' 'MemFree: 8101884 kB' 'MemAvailable: 9496616 kB' 'Buffers: 2436 kB' 'Cached: 1609140 kB' 'SwapCached: 0 kB' 'Active: 452684 kB' 'Inactive: 1280252 kB' 'Active(anon): 131824 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1280252 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 122940 kB' 'Mapped: 48744 kB' 'Shmem: 10464 kB' 'KReclaimable: 61120 kB' 'Slab: 132384 kB' 'SReclaimable: 61120 kB' 'SUnreclaim: 71264 kB' 'KernelStack: 6240 kB' 'PageTables: 4276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 352440 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:04:34.317 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.317 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.317 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.317 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.317 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.317 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.317 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.317 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.317 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.317 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.317 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.317 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.317 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.317 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.317 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.317 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.317 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.317 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.317 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.317 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.317 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.317 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.317 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.317 16:53:24 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.317 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.317 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.317 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.317 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.317 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.317 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.317 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.317 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.317 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.317 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.317 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.317 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.317 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.317 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.317 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.317 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.317 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.317 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.317 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.317 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.317 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.317 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.317 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.317 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.317 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.317 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.317 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.317 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.317 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.317 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.317 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.317 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.317 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.317 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.317 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
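A side note on reading these traces: the backslash-riddled right-hand sides such as \H\u\g\e\P\a\g\e\s\_\S\u\r\p are simply how bash xtrace prints a quoted, literal (non-glob) pattern inside [[ ... ]]; each line is an ordinary string comparison of one meminfo key. A tiny reproduction, assuming nothing beyond stock bash:

    # Sketch: reproduce the escaped pattern style seen throughout this trace.
    set -x
    var=ShmemHugePages
    [[ $var == "HugePages_Surp" ]] || true
    # xtrace prints: [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
    set +x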
00:04:34.317 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.317 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.317 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.317 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.317 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.317 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.317 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.317 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.317 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.317 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.317 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.317 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
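The surrounding run of comparisons is a single-field lookup over /proc/meminfo: each line is split with IFS=': ', the key is checked against the wanted name, and every non-matching key hits the continue seen above. A self-contained sketch of the same pattern (an illustrative stand-in, not the verbatim setup/common.sh get_meminfo helper, which can also read the per-node meminfo files under /sys/devices/system/node/):

    # Sketch: fetch one field from /proc/meminfo the way the traced loop does.
    get_meminfo_field() {                        # hypothetical helper name
        local get=$1 var val _
        while IFS=': ' read -r var val _; do     # a unit such as 'kB' lands in the third field
            [[ $var == "$get" ]] || continue     # skip non-matching keys, as in the trace
            echo "$val"
            return 0
        done < /proc/meminfo
        return 1
    }
    get_meminfo_field HugePages_Surp             # prints 0 on the VM captured above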
00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.318 
16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.318 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:34.319 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:34.319 16:53:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:34.319 16:53:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:34.319 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:34.319 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:34.319 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:34.319 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:34.319 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:34.319 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:34.319 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:34.319 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:34.319 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:34.319 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.319 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.319 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8101884 kB' 'MemAvailable: 9496616 kB' 'Buffers: 2436 kB' 'Cached: 1609140 kB' 'SwapCached: 0 kB' 'Active: 452236 kB' 'Inactive: 1280252 kB' 'Active(anon): 131376 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1280252 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 122792 kB' 'Mapped: 48684 kB' 'Shmem: 10464 kB' 'KReclaimable: 61120 kB' 'Slab: 132388 kB' 'SReclaimable: 61120 kB' 'SUnreclaim: 71268 kB' 'KernelStack: 6240 kB' 'PageTables: 4276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 352440 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:04:34.319 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.319 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.319 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.319 16:53:24 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.319 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.319 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.319 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.319 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.319 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.319 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.319 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.319 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.319 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.319 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.319 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.319 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.319 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.319 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.319 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.319 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.319 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.319 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.319 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.319 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.319 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.319 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.319 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.319 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.319 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.319 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.319 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.319 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.319 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.319 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.319 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.319 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.319 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.319 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.319 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
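These HugePages_Surp and HugePages_Rsvd lookups feed the bookkeeping that verify_nr_hugepages reports a little further down (nr_hugepages=1025, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0) before checking the pool arithmetic. A condensed sketch of that flow, reusing the hypothetical get_meminfo_field helper from the earlier sketch; variable names and the exact check layout are illustrative:

    expected=1025                                  # the odd page count configured above
    anon=$(get_meminfo_field AnonHugePages)        # 0 kB in the dumps above
    surp=$(get_meminfo_field HugePages_Surp)       # 0
    resv=$(get_meminfo_field HugePages_Rsvd)       # 0
    echo "nr_hugepages=$expected resv_hugepages=$resv surplus_hugepages=$surp anon_hugepages=$anon"
    total=$(get_meminfo_field HugePages_Total)     # 1025
    (( total == expected + surp + resv ))          # the allocated pool adds up to the request
    (( total == expected ))                        # and none of it is surplus or reserved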
00:04:34.319 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.319 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.319 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.319 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.319 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.319 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.319 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.319 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.319 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.319 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.319 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.319 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.319 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.319 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.319 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.319 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.319 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.319 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.319 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.319 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.319 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.319 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.319 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.319 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.319 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.319 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.319 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.319 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.319 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.319 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.319 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.319 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.319 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.319 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.319 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.319 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:34.319 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.319 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.319 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.319 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.319 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.319 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.319 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.319 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.319 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.319 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.319 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.319 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.319 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.319 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.319 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.319 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.319 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.319 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.319 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.319 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.319 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.319 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.319 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.319 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.319 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.319 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.319 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.319 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.319 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.319 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.320 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.320 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.320 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.320 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.320 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.320 16:53:24 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:34.320 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.320 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.320 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.320 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.320 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.320 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.320 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.320 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.320 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.320 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.320 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.320 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.320 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.320 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.320 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.320 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.320 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.320 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.320 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.320 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.320 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.320 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.320 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.320 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.320 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.320 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.320 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.320 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.320 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.320 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.320 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.320 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.320 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.320 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.320 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.320 16:53:24 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.320 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.320 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.320 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.320 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.320 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.320 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.320 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.320 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.320 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.320 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.320 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.320 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.320 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.320 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.320 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.320 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.320 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.320 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.320 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.320 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.320 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.320 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.320 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.320 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.320 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.320 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.320 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.320 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.320 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.320 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.320 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.320 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.320 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.320 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.320 16:53:24 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:34.320 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.320 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.320 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.320 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.320 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.320 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.320 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.320 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.320 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.320 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.320 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.320 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.320 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.320 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.320 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.320 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.320 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.320 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.320 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.320 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:34.320 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:34.320 nr_hugepages=1025 00:04:34.320 resv_hugepages=0 00:04:34.320 surplus_hugepages=0 00:04:34.320 anon_hugepages=0 00:04:34.320 16:53:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:34.320 16:53:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:34.320 16:53:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:34.320 16:53:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:34.320 16:53:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:34.320 16:53:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:34.320 16:53:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:34.320 16:53:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:34.320 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:34.320 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:34.320 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:34.320 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:34.320 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # 
mem_f=/proc/meminfo 00:04:34.320 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:34.320 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:34.320 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:34.321 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:34.321 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.321 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8102136 kB' 'MemAvailable: 9496868 kB' 'Buffers: 2436 kB' 'Cached: 1609140 kB' 'SwapCached: 0 kB' 'Active: 452200 kB' 'Inactive: 1280252 kB' 'Active(anon): 131340 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1280252 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 122708 kB' 'Mapped: 48684 kB' 'Shmem: 10464 kB' 'KReclaimable: 61120 kB' 'Slab: 132388 kB' 'SReclaimable: 61120 kB' 'SUnreclaim: 71268 kB' 'KernelStack: 6224 kB' 'PageTables: 4224 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 352440 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:04:34.321 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.321 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.321 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.321 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.321 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.321 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.321 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.321 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.321 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.321 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.321 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.321 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.321 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.321 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.321 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.321 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.321 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:34.321 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.321 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.321 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.321 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.321 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.321 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.321 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.321 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.321 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.321 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.321 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.321 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.321 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.321 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.321 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.321 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.321 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.321 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.321 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.321 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.321 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.321 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.321 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.321 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.321 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.321 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.321 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.321 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.321 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.321 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.321 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.321 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.321 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.321 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.321 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.321 16:53:24 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:34.321 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.321 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.321 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.321 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.321 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.321 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.321 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.321 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.321 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.321 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.321 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.321 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.321 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.321 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.321 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.321 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.321 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.321 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.321 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.321 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.321 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.321 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.321 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.321 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.321 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.321 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.321 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.321 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.321 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.321 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.321 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.321 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.321 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.321 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.321 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.321 16:53:24 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.321 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.321 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.321 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.321 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.321 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.321 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.321 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.321 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.321 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.321 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.321 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.321 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.321 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.321 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.321 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.321 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.321 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.321 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.321 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.321 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.321 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.321 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.321 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.321 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.321 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.321 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.321 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.321 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.321 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.321 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.322 16:53:24 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@33 -- # echo 1025 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8102768 kB' 'MemUsed: 4139208 kB' 'SwapCached: 0 kB' 'Active: 452500 kB' 'Inactive: 1280252 kB' 'Active(anon): 131640 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1280252 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'FilePages: 1611576 kB' 'Mapped: 48684 kB' 'AnonPages: 122512 kB' 'Shmem: 10464 kB' 'KernelStack: 6224 kB' 'PageTables: 4228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61120 kB' 'Slab: 132388 kB' 'SReclaimable: 61120 kB' 'SUnreclaim: 71268 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.322 16:53:24 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.322 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
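[editor's note] The lookup being traced here is node-scoped: earlier in this same call the script switched its source from /proc/meminfo to /sys/devices/system/node/node0/meminfo and stripped the leading "Node 0 " from each line (the mem=("${mem[@]#Node +([0-9]) }") step visible above). A minimal stand-alone sketch of that source selection, using a hypothetical helper name rather than the project's setup/common.sh:

#!/usr/bin/env bash
# pick_meminfo_source (illustrative name): print meminfo lines either system-wide
# or for one NUMA node, with the per-node "Node N " prefix removed.
shopt -s extglob                                   # needed for the +([0-9]) pattern
pick_meminfo_source() {
    local node=${1:-}                              # empty => system-wide /proc/meminfo
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")               # per-node lines read "Node 0 MemTotal: ..."
    printf '%s\n' "${mem[@]}"
}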
00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 
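[editor's note] The "echo 0" just traced is the end of one such lookup: each line of the captured snapshot is split on ': ', keys that do not match fall through via "continue", and the value of the matching key is echoed back to the caller. A hedged sketch of that scan loop, again with an illustrative name rather than the real setup/common.sh function:

# scan_meminfo (illustrative name): read "Key: value [unit]" lines on stdin and
# echo the value of the requested key, mirroring the IFS=': ' / read / continue
# pattern in the trace above.
scan_meminfo() {
    local key=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$key" ]] || continue           # skip every other key
        echo "$val"
        return 0
    done
    return 1                                       # key not present in the snapshot
}
# Example, combining it with the sketch shown earlier:
#   pick_meminfo_source 0 | scan_meminfo HugePages_Surp    # -> 0 in this run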
00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:34.323 node0=1025 expecting 1025 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:04:34.323 00:04:34.323 real 0m0.551s 00:04:34.323 user 0m0.276s 00:04:34.323 sys 0m0.273s 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:34.323 ************************************ 00:04:34.323 END TEST odd_alloc 00:04:34.323 ************************************ 00:04:34.323 16:53:24 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:34.323 16:53:24 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:34.323 16:53:24 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:34.323 16:53:24 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:34.323 16:53:24 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:34.324 16:53:24 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:34.324 ************************************ 00:04:34.324 START TEST custom_alloc 00:04:34.324 ************************************ 00:04:34.324 16:53:24 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:04:34.324 16:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:04:34.324 16:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:04:34.324 16:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:34.324 16:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:34.324 16:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:34.324 16:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:34.324 16:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:34.324 16:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:34.324 16:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:34.324 16:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:34.324 16:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:34.324 16:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:34.324 16:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:34.324 16:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:34.324 16:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:34.324 16:53:24 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@67 -- # nodes_test=() 00:04:34.324 16:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:34.324 16:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:34.324 16:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:34.324 16:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:34.324 16:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:34.324 16:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:34.324 16:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:34.324 16:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:34.324 16:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:34.324 16:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:04:34.324 16:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:34.324 16:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:34.324 16:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:34.324 16:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:34.324 16:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:34.324 16:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:34.324 16:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:34.324 16:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:34.324 16:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:34.324 16:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:34.324 16:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:34.324 16:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:34.324 16:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:34.324 16:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:34.324 16:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:34.324 16:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:04:34.324 16:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:04:34.324 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:34.324 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:34.896 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:34.896 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:34.896 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:34.896 16:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:04:34.896 16:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- 
# verify_nr_hugepages 00:04:34.896 16:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:04:34.896 16:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:34.896 16:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:34.896 16:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:34.896 16:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:34.896 16:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:34.896 16:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:34.896 16:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:34.896 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:34.896 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:34.896 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:34.896 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:34.896 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:34.896 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:34.896 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:34.896 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:34.896 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:34.896 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.896 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9153064 kB' 'MemAvailable: 10547796 kB' 'Buffers: 2436 kB' 'Cached: 1609140 kB' 'SwapCached: 0 kB' 'Active: 452712 kB' 'Inactive: 1280252 kB' 'Active(anon): 131852 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1280252 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 123044 kB' 'Mapped: 48780 kB' 'Shmem: 10464 kB' 'KReclaimable: 61120 kB' 'Slab: 132324 kB' 'SReclaimable: 61120 kB' 'SUnreclaim: 71204 kB' 'KernelStack: 6212 kB' 'PageTables: 4264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 352072 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54580 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:04:34.896 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.896 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.896 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.896 16:53:24 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.896 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.896 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.896 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.896 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.896 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.896 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.896 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.896 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.896 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
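[editor's note] For the check this trace is building toward: custom_alloc requested 512 2 MiB pages on node 0 via HUGENODE='nodes_hp[0]=512' before re-running scripts/setup.sh, and the snapshot above already reports HugePages_Total: 512. The bookkeeping these hugepages tests perform amounts to requiring that the kernel-reported total equals the requested count plus surplus and reserved pages; the real setup/hugepages.sh additionally walks the per-node counts. A rough, system-wide illustration of that arithmetic with a made-up function name:

check_hugepage_accounting() {
    local want=$1                                  # e.g. 512 for custom_alloc, 1025 for odd_alloc
    local total surp resv
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
    resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
    (( total == want + surp + resv ))              # non-zero exit means the allocation drifted
}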
00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.897 16:53:24 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.897 16:53:24 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.897 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.898 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.898 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.898 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.898 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.898 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.898 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.898 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.898 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.898 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.898 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.898 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.898 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.898 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.898 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.898 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.898 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.898 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:34.898 16:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:34.898 16:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:34.898 16:53:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:34.898 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:34.898 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:34.898 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:34.898 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:34.898 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:34.898 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:34.898 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:34.898 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:34.898 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:34.898 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9153316 kB' 'MemAvailable: 10548048 kB' 'Buffers: 2436 kB' 'Cached: 1609140 kB' 'SwapCached: 0 kB' 'Active: 452364 kB' 'Inactive: 1280252 kB' 'Active(anon): 
131504 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1280252 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 122904 kB' 'Mapped: 48840 kB' 'Shmem: 10464 kB' 'KReclaimable: 61120 kB' 'Slab: 132288 kB' 'SReclaimable: 61120 kB' 'SUnreclaim: 71168 kB' 'KernelStack: 6180 kB' 'PageTables: 4160 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 352572 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54532 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:04:34.898 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.898 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.898 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.898 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.898 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.898 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.898 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.898 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.898 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.898 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.898 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.898 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.898 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.898 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.898 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.898 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.898 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.898 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.898 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.898 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.898 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.898 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.898 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.898 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.898 16:53:25 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.898 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.898 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.898 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.898 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.898 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.898 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.898 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.898 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.898 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.898 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.898 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.898 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.898 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.898 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.898 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.898 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.898 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.898 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.898 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.898 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.898 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.898 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.898 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.898 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.898 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.898 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.898 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.898 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.898 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.898 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.898 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.898 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.898 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.898 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.898 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.898 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.898 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.898 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.898 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.898 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.898 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.898 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.898 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.898 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.898 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.898 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.898 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.898 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.898 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.898 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.898 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.898 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.898 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.898 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.898 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.898 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.898 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.898 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.898 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.898 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.898 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.898 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:34.899 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:34.900 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:34.900 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.900 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.900 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9153316 kB' 'MemAvailable: 10548048 kB' 'Buffers: 2436 kB' 'Cached: 1609140 kB' 'SwapCached: 0 kB' 'Active: 452240 kB' 'Inactive: 1280252 kB' 'Active(anon): 131380 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1280252 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 122816 kB' 'Mapped: 48684 kB' 'Shmem: 10464 kB' 'KReclaimable: 61120 kB' 'Slab: 132284 kB' 'SReclaimable: 61120 kB' 'SUnreclaim: 71164 kB' 'KernelStack: 6240 kB' 'PageTables: 4280 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 352572 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54516 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 
'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:04:34.900 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.900 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.900 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.900 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.900 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.900 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.900 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.900 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.900 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.900 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.900 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.900 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.900 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.900 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.900 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.900 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.900 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.900 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.900 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.900 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.900 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.900 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.900 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.900 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.900 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.900 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.900 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.900 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.900 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.900 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.900 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.900 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.900 16:53:25 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.900 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.900 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.900 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.900 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.900 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.900 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.900 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.900 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.900 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.900 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.900 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.900 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.900 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.900 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.900 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.900 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.900 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.900 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.900 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.900 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.900 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.900 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.900 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.900 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.900 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.900 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.900 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.900 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.900 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.900 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.900 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.900 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.900 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.900 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:34.900 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.900 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.900 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.900 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.900 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.900 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.900 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.900 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.900 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.900 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.900 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.900 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.900 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.900 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.900 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.900 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.900 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.900 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.900 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.900 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.900 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.900 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.900 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.901 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.901 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.901 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.901 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.901 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.901 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.901 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.901 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.901 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.901 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.901 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.901 16:53:25 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:34.901 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.901 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.901 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.901 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.901 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.901 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.901 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.901 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.901 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.901 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.901 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.901 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.901 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.901 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.901 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.901 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.901 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.901 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.901 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.901 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.901 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.901 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.901 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.901 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.901 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.901 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.901 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.901 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.901 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.901 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.901 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.901 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.901 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.901 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.901 16:53:25 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.901 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.901 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.901 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.901 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.901 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.901 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.901 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.901 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.901 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.901 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.901 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.901 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.901 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.901 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.901 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.901 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.901 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.901 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.901 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.901 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.901 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.901 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.901 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.901 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.901 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.901 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.901 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.901 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.901 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.901 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.901 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.901 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.901 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.901 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:34.901 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.901 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.901 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.901 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.901 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.901 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.901 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.901 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.901 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.901 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.901 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.901 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.901 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.901 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.901 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.901 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.901 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.901 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.901 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.901 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.901 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.901 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.901 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.901 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.901 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.901 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.901 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.901 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.901 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.901 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.901 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:34.901 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:34.901 16:53:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:34.901 16:53:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:34.901 nr_hugepages=512 00:04:34.901 resv_hugepages=0 
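The three passes traced above are successive get_meminfo calls for AnonHugePages, HugePages_Surp and HugePages_Rsvd: each one snapshots /proc/meminfo (after checking the per-node /sys/devices/system/node/node<N>/meminfo path, which is skipped here because no node is given), strips any "Node N " prefix, walks every field with IFS=': ' read, and echoes the value of the requested key, which the caller stores as anon, surp and resv (all 0 in this run). As a rough, simplified stand-in for that helper (not the actual code in setup/common.sh; the function name is hypothetical), the parsing amounts to:

    # Simplified illustration of the get_meminfo pattern seen in the trace above;
    # get_meminfo_sketch is a hypothetical name, not the SPDK helper itself.
    get_meminfo_sketch() {
        local get=$1 node=${2:-} mem_f=/proc/meminfo line var val rest
        # Per-node counters come from sysfs when a node is supplied; the trace shows the
        # same path check with an empty node, so it stays on /proc/meminfo here.
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        while read -r line; do
            line=${line#Node * }                      # per-node rows carry a "Node N " prefix
            IFS=': ' read -r var val rest <<<"$line"  # "HugePages_Total:  512" -> var, val
            if [[ $var == "$get" ]]; then
                echo "${val:-0}"                      # e.g. 0 for AnonHugePages in this run
                return 0
            fi
        done <"$mem_f"
        echo 0                                        # key not present: report 0
    }

    # Usage mirroring the traced calls: anon=$(get_meminfo_sketch AnonHugePages), etc.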
00:04:34.902 16:53:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:34.902 surplus_hugepages=0 00:04:34.902 16:53:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:34.902 anon_hugepages=0 00:04:34.902 16:53:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:34.902 16:53:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:34.902 16:53:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:34.902 16:53:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:34.902 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:34.902 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:34.902 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:34.902 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:34.902 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:34.902 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:34.902 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:34.902 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:34.902 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:34.902 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.902 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9153316 kB' 'MemAvailable: 10548048 kB' 'Buffers: 2436 kB' 'Cached: 1609140 kB' 'SwapCached: 0 kB' 'Active: 452548 kB' 'Inactive: 1280252 kB' 'Active(anon): 131688 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1280252 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 122936 kB' 'Mapped: 48684 kB' 'Shmem: 10464 kB' 'KReclaimable: 61120 kB' 'Slab: 132284 kB' 'SReclaimable: 61120 kB' 'SUnreclaim: 71164 kB' 'KernelStack: 6256 kB' 'PageTables: 4336 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 352572 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54516 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:04:34.902 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.902 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.902 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.902 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.902 16:53:25 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.902 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.902 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.902 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.902 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.902 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.902 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.902 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.902 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.902 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.902 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.902 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.902 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.902 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.902 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.902 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.902 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.902 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.902 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.902 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.902 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.902 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.902 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.902 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.902 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.902 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.902 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.902 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.902 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.902 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.902 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.902 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.902 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.902 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.902 16:53:25 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:34.902 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.902 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.902 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.902 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.902 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.902 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.902 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.902 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.902 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.902 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.902 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.902 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.902 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.902 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.902 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.902 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.902 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.902 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.902 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.902 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.902 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.902 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.902 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.902 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.902 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.902 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.902 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.902 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.902 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.902 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.902 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.902 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.902 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.902 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.902 16:53:25 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.902 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.902 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.902 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.902 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.902 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.902 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.902 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.902 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.902 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.902 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.902 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.902 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.902 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.902 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.902 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.902 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.902 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.902 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.902 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.902 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.902 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.902 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.902 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.902 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.902 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.902 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.902 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.902 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.903 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.903 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.903 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.903 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.903 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.903 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:34.903 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.903 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.903 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.903 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.903 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.903 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.903 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.903 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.903 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.903 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.903 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.903 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.903 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.903 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.903 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.903 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.903 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.903 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.903 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.903 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.903 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.903 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.903 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.903 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.903 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.903 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.903 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.903 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.903 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.903 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.903 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.903 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.903 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.903 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.903 
16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.903 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.903 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.903 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.903 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.903 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.903 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.903 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.903 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.903 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.903 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.903 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.903 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.903 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.903 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.903 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.903 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.903 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.903 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.903 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.903 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.903 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.903 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.903 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.903 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.903 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.903 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.903 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.903 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.903 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.903 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.903 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.903 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.903 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.903 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:34.903 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.903 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.903 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.903 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.903 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.903 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.903 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.903 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.903 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.903 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.903 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.903 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.903 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.903 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.903 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.903 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.903 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.903 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512 00:04:34.903 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:34.903 16:53:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:34.903 16:53:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:34.903 16:53:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:04:34.903 16:53:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:34.903 16:53:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:34.903 16:53:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:34.903 16:53:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:34.903 16:53:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:34.903 16:53:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:34.903 16:53:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:34.903 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:34.903 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:04:34.903 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:34.903 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:34.903 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 
-- # mem_f=/proc/meminfo 00:04:34.903 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:34.903 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:34.903 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:34.903 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:34.903 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.903 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.903 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9153316 kB' 'MemUsed: 3088660 kB' 'SwapCached: 0 kB' 'Active: 452488 kB' 'Inactive: 1280252 kB' 'Active(anon): 131628 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1280252 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'FilePages: 1611576 kB' 'Mapped: 48684 kB' 'AnonPages: 122552 kB' 'Shmem: 10464 kB' 'KernelStack: 6208 kB' 'PageTables: 4188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61120 kB' 'Slab: 132284 kB' 'SReclaimable: 61120 kB' 'SUnreclaim: 71164 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:34.903 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.903 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.903 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.903 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.903 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.903 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.903 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.904 16:53:25 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.904 16:53:25 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.904 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.905 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.905 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.905 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.905 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.905 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.905 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.905 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.905 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.905 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.905 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.905 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.905 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.905 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.905 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:34.905 16:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:34.905 node0=512 expecting 512 00:04:34.905 ************************************ 00:04:34.905 END TEST custom_alloc 00:04:34.905 ************************************ 00:04:34.905 16:53:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:34.905 16:53:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:34.905 16:53:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:34.905 16:53:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:34.905 16:53:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:34.905 16:53:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:34.905 00:04:34.905 real 0m0.584s 00:04:34.905 user 0m0.297s 00:04:34.905 sys 0m0.287s 00:04:34.905 16:53:25 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:34.905 16:53:25 setup.sh.hugepages.custom_alloc 
-- common/autotest_common.sh@10 -- # set +x 00:04:35.164 16:53:25 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:35.164 16:53:25 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:35.164 16:53:25 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:35.164 16:53:25 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:35.164 16:53:25 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:35.164 ************************************ 00:04:35.164 START TEST no_shrink_alloc 00:04:35.164 ************************************ 00:04:35.164 16:53:25 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:04:35.164 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:35.164 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:35.164 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:35.164 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:04:35.164 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:35.164 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:35.164 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:35.164 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:35.164 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:35.164 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:35.164 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:35.164 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:35.164 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:35.164 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:35.164 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:35.164 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:35.164 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:35.164 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:35.164 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:35.164 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:04:35.164 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:35.164 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:35.426 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:35.426 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:35.426 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:35.426 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:35.426 
16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:35.426 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:35.426 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:35.426 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:35.426 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:35.426 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:35.426 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:35.426 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:35.426 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:35.426 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:35.426 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:35.426 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:35.426 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:35.426 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:35.426 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:35.426 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:35.426 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:35.426 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.426 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8107196 kB' 'MemAvailable: 9501928 kB' 'Buffers: 2436 kB' 'Cached: 1609140 kB' 'SwapCached: 0 kB' 'Active: 452656 kB' 'Inactive: 1280252 kB' 'Active(anon): 131796 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1280252 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 123020 kB' 'Mapped: 48944 kB' 'Shmem: 10464 kB' 'KReclaimable: 61120 kB' 'Slab: 132288 kB' 'SReclaimable: 61120 kB' 'SUnreclaim: 71168 kB' 'KernelStack: 6228 kB' 'PageTables: 4320 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 352572 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54532 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
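
The no_shrink_alloc test that starts above asks for a 2097152 kB pool pinned to node 0; with the 2048 kB Hugepagesize reported in these dumps that is the 1024 pages showing up as HugePages_Total: 1024 / Hugetlb: 2097152 kB. The sizing traced from get_test_nr_hugepages works out to roughly the following sketch (names follow the trace; the real hugepages.sh handles more cases than this single-node example):

```bash
#!/usr/bin/env bash
# Sketch of the sizing traced from get_test_nr_hugepages for no_shrink_alloc
# (2097152 kB requested on node 0).

size_kb=2097152   # requested pool size in kB (2 GiB)
node_ids=(0)      # nodes the caller pinned the pool to

default_hugepages=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 kB here
(( size_kb >= default_hugepages )) || { echo "pool smaller than one hugepage" >&2; exit 1; }

nr_hugepages=$(( size_kb / default_hugepages ))   # 2097152 / 2048 = 1024

# Assign the target to each requested node; with only node 0 listed, the
# whole 1024-page pool lands there, matching the dump above.
declare -A nodes_test=()
for node in "${node_ids[@]}"; do
    nodes_test[$node]=$nr_hugepages
done

echo "nr_hugepages=$nr_hugepages nodes=${!nodes_test[*]}"   # nr_hugepages=1024 nodes=0
```

The verify pass that follows checks the transparent-hugepage setting first (the "[[ always [madvise] never != *\[\n\e\v\e\r\]* ]]" entry above) and only then samples AnonHugePages, which the earlier custom_alloc run already reported as anon_hugepages=0.
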
00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.427 
16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.427 
16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.427 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.428 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.428 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.428 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.428 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.428 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.428 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.428 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.428 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.428 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.428 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.428 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.428 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.428 16:53:25 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:35.428 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.428 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.428 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.428 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.428 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.428 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.428 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.428 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.428 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.428 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.428 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.428 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.428 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.428 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.428 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.428 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.428 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.428 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.428 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.428 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.428 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.428 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.428 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.428 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:35.428 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:35.428 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:35.428 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:35.428 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:35.428 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:35.428 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:35.428 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:35.428 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:35.428 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:35.428 16:53:25 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@25 -- # [[ -n '' ]] 00:04:35.428 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:35.428 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:35.428 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.428 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8107196 kB' 'MemAvailable: 9501928 kB' 'Buffers: 2436 kB' 'Cached: 1609140 kB' 'SwapCached: 0 kB' 'Active: 452600 kB' 'Inactive: 1280252 kB' 'Active(anon): 131740 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1280252 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 122912 kB' 'Mapped: 48764 kB' 'Shmem: 10464 kB' 'KReclaimable: 61120 kB' 'Slab: 132280 kB' 'SReclaimable: 61120 kB' 'SUnreclaim: 71160 kB' 'KernelStack: 6224 kB' 'PageTables: 4224 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 352572 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54516 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:04:35.428 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.428 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.428 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.428 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.428 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.428 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.428 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.428 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.428 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.428 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.428 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.428 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.428 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.428 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.428 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.428 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.428 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.428 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.428 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.428 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.428 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.428 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.428 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.428 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.428 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.428 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.428 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.428 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.428 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.428 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.428 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.428 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.428 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.428 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.428 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.428 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.428 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.428 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.428 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.428 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.428 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.428 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.428 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.428 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.428 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.428 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.428 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.428 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.428 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.428 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.428 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.428 16:53:25 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.428 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.428 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.428 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.428 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.428 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.428 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.428 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.428 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.428 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.428 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.428 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.428 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.429 16:53:25 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# continue 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.429 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.430 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.430 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.430 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.430 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.430 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.430 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.430 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.430 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.430 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.430 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.430 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.430 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.430 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:35.430 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:35.430 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:35.430 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:35.430 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:35.430 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:35.430 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:35.430 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:35.430 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:35.430 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:35.430 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:35.430 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:35.430 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:35.430 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:35.430 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8107196 kB' 'MemAvailable: 9501928 kB' 'Buffers: 2436 kB' 'Cached: 1609140 kB' 'SwapCached: 0 kB' 'Active: 452268 kB' 'Inactive: 1280252 kB' 'Active(anon): 131408 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1280252 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 122836 kB' 'Mapped: 48684 kB' 'Shmem: 10464 kB' 'KReclaimable: 61120 kB' 'Slab: 132280 kB' 'SReclaimable: 61120 kB' 'SUnreclaim: 71160 kB' 'KernelStack: 6240 kB' 'PageTables: 4276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 352572 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54516 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:04:35.430 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.430 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.430 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.430 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.430 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.430 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.430 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.430 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.430 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.430 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.430 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.430 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.430 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.430 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.430 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.430 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.430 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.430 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.430 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.430 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.430 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:35.430 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.430 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.430 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.430 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.430 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.430 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.430 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.430 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.430 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.430 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.430 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.430 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.430 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.430 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.430 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.430 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.430 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.430 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.430 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.430 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.430 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.430 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.430 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.430 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.430 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.430 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.430 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.430 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.430 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.430 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.430 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.430 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.430 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.430 16:53:25 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.430 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.430 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.430 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.430 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.430 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.430 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.430 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.430 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.430 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.430 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.430 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.430 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.430 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.430 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.430 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.430 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.430 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.430 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.430 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.430 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.430 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.430 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.430 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.430 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.430 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.431 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.431 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.431 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.431 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.431 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.431 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.431 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.431 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.431 16:53:25 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.431 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.431 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.431 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.431 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.431 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.431 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.431 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.431 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.431 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.431 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.431 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.431 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.431 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.431 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.431 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.431 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.431 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.431 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.431 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.431 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.431 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.431 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.431 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.431 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.431 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.431 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.431 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.431 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.431 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.431 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.431 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.431 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.431 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.431 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.431 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.431 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.431 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.431 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.431 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.431 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.431 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.431 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.431 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.431 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.431 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.431 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.431 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.431 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.431 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.431 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.431 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.431 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.431 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.431 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.431 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.431 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.431 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.431 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.431 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.431 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.431 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.431 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.431 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.431 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.431 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.431 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.431 16:53:25 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:35.431 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.431 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.431 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.431 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.431 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.431 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.431 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.431 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.431 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.431 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.431 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.431 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.431 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.431 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.431 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.431 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.431 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.431 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.431 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.431 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.431 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.431 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.431 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.431 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.431 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.431 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.431 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.431 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.431 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.431 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.431 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.693 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.693 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.693 16:53:25 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.693 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.693 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.693 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.693 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.693 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.693 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.693 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.693 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.693 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.693 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.693 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.693 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.693 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:35.693 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:35.693 nr_hugepages=1024 00:04:35.693 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:35.693 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:35.693 resv_hugepages=0 00:04:35.693 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:35.693 surplus_hugepages=0 00:04:35.693 anon_hugepages=0 00:04:35.693 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:35.693 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:35.693 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:35.693 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:35.693 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:35.693 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:35.693 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:35.693 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:35.693 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:35.693 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:35.693 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:35.693 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:35.693 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:35.693 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 
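
The records above close out the HugePages_Rsvd lookup (echo 0 / return 0), leaving the test with nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0 before it re-queries HugePages_Total. A minimal sketch of that accounting step follows; get_meminfo_value is a hypothetical awk stand-in, not the mapfile-based get_meminfo helper actually traced here.

#!/usr/bin/env bash
# Sketch only (not the SPDK script): the bookkeeping the trace performs around
# setup/hugepages.sh@97-110. get_meminfo_value is an illustrative helper.
get_meminfo_value() {
    local key=$1
    awk -v k="${key}:" '$1 == k {print $2}' /proc/meminfo
}

requested=1024                                    # allocation asked for (1024 in this log)
anon=$(get_meminfo_value AnonHugePages)           # 0 in this run
surp=$(get_meminfo_value HugePages_Surp)          # 0 in this run
resv=$(get_meminfo_value HugePages_Rsvd)          # 0 in this run
total=$(get_meminfo_value HugePages_Total)        # 1024 in this run

echo "nr_hugepages=$requested"
echo "resv_hugepages=$resv"
echo "surplus_hugepages=$surp"
echo "anon_hugepages=$anon"

# The arithmetic guards seen at hugepages.sh@107 and @109 in the trace.
(( requested == total + surp + resv )) && (( requested == total ))
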
00:04:35.693 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.693 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8107196 kB' 'MemAvailable: 9501928 kB' 'Buffers: 2436 kB' 'Cached: 1609140 kB' 'SwapCached: 0 kB' 'Active: 452192 kB' 'Inactive: 1280252 kB' 'Active(anon): 131332 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1280252 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 122728 kB' 'Mapped: 48684 kB' 'Shmem: 10464 kB' 'KReclaimable: 61120 kB' 'Slab: 132268 kB' 'SReclaimable: 61120 kB' 'SUnreclaim: 71148 kB' 'KernelStack: 6224 kB' 'PageTables: 4224 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 352572 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54500 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:04:35.693 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.693 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.693 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.693 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.693 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.693 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.693 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.693 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.693 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.693 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.693 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.693 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.693 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.693 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.693 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.693 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.693 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.693 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.693 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.693 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
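
Every field in the snapshot printed above that is not HugePages_Total becomes one IFS/read/compare/continue record in this log, which is where the bulk of these lines comes from. Below is a simplified sketch of that scan, assuming a direct read of /proc/meminfo instead of the per-node printf and 'Node N ' prefix stripping visible in the trace.

# Sketch of the field scan traced at setup/common.sh@28-33 (simplified).
get_meminfo() {
    local get=$1 line var val _
    local -a mem
    mapfile -t mem < /proc/meminfo        # the trace feeds this from a printf instead
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        # Each non-matching key is one "continue" record in the log above.
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done
    return 1
}

get_meminfo HugePages_Total    # prints 1024 on the machine in this log
get_meminfo HugePages_Surp     # prints 0
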
00:04:35.693 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.693 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.693 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.693 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.693 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.693 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.693 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.693 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.693 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.693 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.693 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.693 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.693 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.693 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.693 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.693 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.693 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.693 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.693 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.693 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.693 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.693 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.693 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.693 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.693 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.693 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.693 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.693 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.693 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.693 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.693 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.693 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.693 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.693 16:53:25 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.693 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.693 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.693 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.693 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.693 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.693 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.694 16:53:25 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
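The backslash-riddled right-hand sides in these [[ ]] entries (\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l and so on) are not corruption: bash's xtrace escapes the characters of a quoted or expanded pattern to show it is matched literally rather than as a glob, so each entry is simply a literal comparison of the current meminfo key against HugePages_Total. A tiny demo that reproduces the same rendering (illustrative only; the exact trace prefix may differ):

set -x
[[ HugePages_Total == "HugePages_Total" ]]
# xtrace prints something like: [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]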
00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.694 16:53:25 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.694 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:04:35.695 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.695 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.695 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.695 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.695 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.695 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.695 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:35.695 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:35.695 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:35.695 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:35.695 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:35.695 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:35.695 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:35.695 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:35.695 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:35.695 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:35.695 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:35.695 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:35.695 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:35.695 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:35.695 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:35.695 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:35.695 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:35.695 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:35.695 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:35.695 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:35.695 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:35.695 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.695 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8107196 kB' 'MemUsed: 4134780 kB' 'SwapCached: 0 kB' 'Active: 452264 kB' 'Inactive: 1280252 kB' 'Active(anon): 131404 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1280252 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'FilePages: 1611576 kB' 'Mapped: 48684 kB' 'AnonPages: 122828 kB' 
'Shmem: 10464 kB' 'KernelStack: 6240 kB' 'PageTables: 4276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61120 kB' 'Slab: 132268 kB' 'SReclaimable: 61120 kB' 'SUnreclaim: 71148 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:35.695 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.695 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.695 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.695 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.695 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.695 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.695 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.695 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.695 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.695 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.695 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.695 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.695 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.695 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.695 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.695 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.695 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.695 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.695 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.695 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.695 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.695 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.695 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.695 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.695 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.695 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.695 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.695 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.695 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.695 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.695 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.695 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.695 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.695 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.695 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.695 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.695 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.695 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.695 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.695 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.695 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.695 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.695 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.695 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.695 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.695 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.695 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.695 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.695 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.695 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.695 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.695 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.695 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.695 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.695 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.695 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.695 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.695 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.695 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.695 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.695 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.695 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.695 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.695 16:53:25 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.695 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.695 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.695 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.695 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.695 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.695 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.695 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.695 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.695 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.695 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.695 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.695 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.695 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.695 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.695 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.695 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.695 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.695 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.695 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.695 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.695 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.695 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.695 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.695 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.695 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.696 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.696 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.696 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.696 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.696 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.696 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.696 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.696 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.696 
16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.696 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.696 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.696 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.696 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.696 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.696 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.696 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.696 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.696 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.696 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.696 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.696 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.696 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.696 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.696 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.696 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.696 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.696 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.696 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.696 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.696 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.696 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.696 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.696 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.696 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.696 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.696 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.696 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.696 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.696 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.696 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.696 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.696 16:53:25 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.696 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.696 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.696 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.696 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.696 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.696 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.696 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.696 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.696 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.696 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.696 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.696 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.696 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.696 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.696 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.696 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:35.696 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:35.696 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:35.696 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:35.696 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:35.696 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:35.696 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:35.696 node0=1024 expecting 1024 00:04:35.696 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:35.696 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:35.696 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:35.696 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:04:35.696 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:35.696 16:53:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:35.958 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:35.958 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:35.958 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:35.958 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:35.958 16:53:26 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:35.958 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:35.958 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:35.958 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:35.958 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:35.958 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:35.958 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:35.958 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:35.958 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:35.958 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:35.958 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:35.958 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:35.958 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:35.958 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:35.958 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:35.958 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:35.958 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:35.958 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:35.958 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.958 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.958 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8109268 kB' 'MemAvailable: 9504000 kB' 'Buffers: 2436 kB' 'Cached: 1609140 kB' 'SwapCached: 0 kB' 'Active: 448544 kB' 'Inactive: 1280252 kB' 'Active(anon): 127684 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1280252 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 118788 kB' 'Mapped: 47872 kB' 'Shmem: 10464 kB' 'KReclaimable: 61116 kB' 'Slab: 132108 kB' 'SReclaimable: 61116 kB' 'SUnreclaim: 70992 kB' 'KernelStack: 6180 kB' 'PageTables: 3708 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 335940 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54516 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:04:35.958 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
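A few entries back (hugepages.sh@27-33 and @126-128) the script enumerated the NUMA node directories, keyed an array by the numeric suffix of each node directory, and printed the per-node expectation 'node0=1024 expecting 1024'. Because setup.sh was then invoked with NRHUGE=512 and CLEAR_HUGE=no, the pre-existing 1024-page pool was left alone, which is exactly what the 'Requested 512 hugepages but 1024 already allocated on node0' message and the unchanged HugePages_Total: 1024 lines in the dumps that follow show. A reconstruction of that per-node bookkeeping pattern (illustrative only; the real helpers are the setup/hugepages.sh functions traced here):

shopt -s extglob nullglob
declare -a nodes_sys
no_nodes=0
for node in /sys/devices/system/node/node+([0-9]); do
    nodes_sys[${node##*node}]=1024   # expected pages per node, as recorded in the trace
    (( ++no_nodes ))
done
(( no_nodes > 0 )) || { echo "no NUMA nodes found" >&2; exit 1; }
for node in "${!nodes_sys[@]}"; do
    echo "node$node=${nodes_sys[node]} expecting 1024"
done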
00:04:35.958 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.958 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.958 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.958 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.959 16:53:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
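The verify pass that began at hugepages.sh@89 only samples AnonHugePages because of the guard at @96, which matches the string 'always [madvise] never' against the glob *\[never\]*; in other words, anonymous-THP accounting is skipped when transparent hugepages are disabled system-wide. That string has the format of /sys/kernel/mm/transparent_hugepage/enabled, but the file itself is not named in this trace, so treat the path as an inference. A hedged sketch of the same guard, reusing get_meminfo_sketch from the earlier note:

# Only count anonymous hugepages when THP is not globally disabled.
thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
if [[ $thp != *"[never]"* ]]; then
    anon=$(get_meminfo_sketch AnonHugePages)          # 0 kB in the dumps above
else
    anon=0
fi
echo "anon_hugepages=$anon"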
00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.959 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.960 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.960 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.960 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.960 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.960 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.960 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.960 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.960 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.960 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.960 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.960 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.960 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.960 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.960 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.960 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.960 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.960 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.960 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.960 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.960 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.960 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.960 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.960 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.960 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.960 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.960 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.960 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.960 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.960 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.960 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:35.960 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:35.960 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:35.960 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:35.960 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:35.960 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:35.960 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:35.960 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:35.960 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:35.960 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # 
[[ -e /sys/devices/system/node/node/meminfo ]] 00:04:35.960 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:35.960 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:35.960 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:35.960 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.960 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8109268 kB' 'MemAvailable: 9504000 kB' 'Buffers: 2436 kB' 'Cached: 1609140 kB' 'SwapCached: 0 kB' 'Active: 447868 kB' 'Inactive: 1280252 kB' 'Active(anon): 127008 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1280252 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 118124 kB' 'Mapped: 47944 kB' 'Shmem: 10464 kB' 'KReclaimable: 61116 kB' 'Slab: 132100 kB' 'SReclaimable: 61116 kB' 'SUnreclaim: 70984 kB' 'KernelStack: 6128 kB' 'PageTables: 3736 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 335940 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54516 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:04:35.960 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.960 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.960 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.960 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.960 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.960 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.960 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.960 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.960 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.960 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.960 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.960 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.960 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.960 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.960 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.960 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.960 16:53:26 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:35.960 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.960 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.960 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.960 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.960 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.960 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.960 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.960 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.960 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.960 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.960 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.960 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.960 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.960 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.960 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.960 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.960 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.960 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.960 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.960 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.960 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.960 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.960 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.960 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.960 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.960 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.960 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.960 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.960 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.960 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.960 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.960 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.960 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:35.960 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.960 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.960 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.960 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.960 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.960 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.960 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.960 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.960 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.960 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.960 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.960 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.960 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.960 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.960 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.960 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.960 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.960 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.960 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.960 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.960 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.960 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.960 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.960 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.961 
16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.961 16:53:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.961 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.962 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.962 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.962 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:35.962 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:35.962 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:35.962 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:35.962 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:35.962 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:35.962 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:35.962 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:35.962 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:35.962 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:35.962 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:35.962 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:35.962 16:53:26 setup.sh.hugepages.no_shrink_alloc -- 
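The trace above is the HugePages_Surp lookup finishing (echo 0, return 0, surp=0) and the HugePages_Rsvd lookup starting; every one of these lookups runs the same get_meminfo helper from setup/common.sh. A minimal sketch of that helper, reconstructed from the traced commands rather than quoted from the SPDK source (exact line numbers and error handling may differ), could look like this:

    shopt -s extglob   # needed for the +([0-9]) prefix strip below

    get_meminfo() {
        local get=$1      # key to look up, e.g. HugePages_Surp
        local node=$2     # optional NUMA node; empty in this run, hence the node/node/meminfo path in the trace
        local var val
        local mem_f mem

        mem_f=/proc/meminfo
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        # Per-node meminfo files prefix every line with "Node N "; strip it.
        mem=("${mem[@]#Node +([0-9]) }")

        # Walk the lines, split on ": ", skip every key that does not match,
        # and print the value of the requested key (this is the long run of
        # [[ ... ]] / continue pairs seen in the trace).
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

In hugepages.sh it is used as a command substitution, e.g. surp=$(get_meminfo HugePages_Surp), which is why the trace shows the get_meminfo call immediately followed by surp=0.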
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:35.962 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.962 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.962 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8109944 kB' 'MemAvailable: 9504676 kB' 'Buffers: 2436 kB' 'Cached: 1609140 kB' 'SwapCached: 0 kB' 'Active: 447848 kB' 'Inactive: 1280252 kB' 'Active(anon): 126988 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1280252 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 118116 kB' 'Mapped: 47944 kB' 'Shmem: 10464 kB' 'KReclaimable: 61116 kB' 'Slab: 132100 kB' 'SReclaimable: 61116 kB' 'SUnreclaim: 70984 kB' 'KernelStack: 6128 kB' 'PageTables: 3736 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 335940 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54500 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:04:35.962 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.962 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.962 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.962 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.962 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.962 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.962 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.962 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.962 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.962 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.962 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.962 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.962 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.962 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.962 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.962 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.962 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.962 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.962 16:53:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.962 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.962 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.962 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.962 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.962 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.962 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.962 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.962 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.962 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.962 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.962 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.962 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.962 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.962 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.962 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.962 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.962 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.962 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.962 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.962 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.962 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.962 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.962 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.962 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.962 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.962 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.962 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.962 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.962 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.962 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.962 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.962 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.962 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:35.962 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.962 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.962 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.962 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.962 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.962 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.962 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.962 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.962 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.962 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.962 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.962 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.962 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.962 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.962 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.962 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.962 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.962 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.962 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.962 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.962 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.962 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.962 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.962 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.962 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.962 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.963 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.963 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.963 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.963 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.963 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.963 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.963 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.963 16:53:26 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:35.963 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.963 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.963 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.963 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.963 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.963 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.963 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.963 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.963 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.963 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.963 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.963 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.963 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.963 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.963 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.963 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.963 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.963 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.963 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.963 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.963 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.963 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.963 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.963 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.963 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.963 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.963 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.963 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.963 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.963 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.963 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.963 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.963 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.963 16:53:26 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:35.963 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.963 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.963 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.963 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.963 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.963 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.963 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.963 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.963 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.963 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.963 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.963 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.963 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.963 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.963 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.963 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.963 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.963 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.963 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.227 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.227 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.227 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.227 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.227 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.227 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.227 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.227 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.227 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.227 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.227 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.227 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.227 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.227 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:36.227 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.227 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.227 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.227 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.227 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.227 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.227 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.227 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.227 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.227 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.227 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.227 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.227 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.227 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.227 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.227 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.227 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.227 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.227 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.227 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.227 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.227 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.227 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.227 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.227 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.227 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.227 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.227 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.227 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.227 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.227 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.227 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.227 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.227 16:53:26 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:36.227 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.227 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.227 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.227 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.227 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.227 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.227 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.227 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.227 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.227 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.227 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.227 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.227 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.227 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.227 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:36.227 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:36.227 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:36.227 nr_hugepages=1024 00:04:36.227 resv_hugepages=0 00:04:36.227 surplus_hugepages=0 00:04:36.227 anon_hugepages=0 00:04:36.227 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:36.227 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:36.227 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:36.227 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:36.227 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:36.227 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:36.227 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:36.227 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:36.227 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:36.227 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:36.227 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:36.227 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:36.227 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:36.227 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:36.227 16:53:26 setup.sh.hugepages.no_shrink_alloc -- 
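At this point hugepages.sh (around the traced lines 97-110) has collected anon=0, surp=0 and resv=0, echoes the nr_hugepages/resv_hugepages/surplus_hugepages/anon_hugepages summary, and asserts that the hugepage pool still matches the requested 1024 pages before cross-checking HugePages_Total. A hedged sketch of that verification step, inferred from the traced echoes and arithmetic tests (variable names other than anon/surp/resv/nr_hugepages are guesses, as is the source of nr_hugepages):

    check_no_shrink_alloc() {
        local expected=1024                          # requested hugepage count in this run
        local nr_hugepages anon surp resv

        anon=$(get_meminfo AnonHugePages)            # -> 0 in the trace
        surp=$(get_meminfo HugePages_Surp)           # -> 0
        resv=$(get_meminfo HugePages_Rsvd)           # -> 0
        nr_hugepages=$(< /proc/sys/vm/nr_hugepages)  # assumption: current pool size, 1024 here

        echo "nr_hugepages=$nr_hugepages"
        echo "resv_hugepages=$resv"
        echo "surplus_hugepages=$surp"
        echo "anon_hugepages=$anon"

        # The pool must not have been padded out with surplus or reserved pages ...
        ((expected == nr_hugepages + surp + resv))
        # ... and must still hold exactly the requested number of pages.
        ((expected == nr_hugepages))
        # The trace then calls get_meminfo HugePages_Total to cross-check the
        # kernel's own total against the same expected count.
    }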
setup/common.sh@28 -- # mapfile -t mem 00:04:36.227 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:36.227 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.227 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.227 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8111344 kB' 'MemAvailable: 9506076 kB' 'Buffers: 2436 kB' 'Cached: 1609140 kB' 'SwapCached: 0 kB' 'Active: 447552 kB' 'Inactive: 1280252 kB' 'Active(anon): 126692 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1280252 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 118060 kB' 'Mapped: 47944 kB' 'Shmem: 10464 kB' 'KReclaimable: 61116 kB' 'Slab: 132100 kB' 'SReclaimable: 61116 kB' 'SUnreclaim: 70984 kB' 'KernelStack: 6112 kB' 'PageTables: 3684 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 335940 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54500 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:04:36.227 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.227 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.227 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.227 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.227 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.227 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.227 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.227 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.227 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.227 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.227 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.227 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.227 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.227 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.227 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.227 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.227 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.227 16:53:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.227 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.227 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.227 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.227 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.227 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.227 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.228 16:53:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:36.228 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.229 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.229 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.229 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.229 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.229 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.229 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.229 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.229 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.229 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.229 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.229 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.229 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.229 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.229 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.229 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.229 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.229 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.229 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.229 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.229 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.229 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.229 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.229 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.229 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.229 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.229 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.229 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.229 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.229 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.229 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.229 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.229 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
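
In the entries that follow, the key finally matches, the helper returns 1024, and hugepages.sh checks that this equals nr_hugepages plus surplus plus reserved before repeating the same lookup per NUMA node: HugePages_Surp is read for node0 from /sys/devices/system/node/node0/meminfo, whose lines carry a "Node 0" prefix that the script strips. A rough per-node sketch of that pass, reconstructed from the trace (the awk shortcut and variable names are mine, not the script's):

    #!/usr/bin/env bash
    # Report surplus hugepages per NUMA node, loosely following the traced flow.
    expected=1024
    for node_dir in /sys/devices/system/node/node[0-9]*; do
        node=${node_dir##*node}
        # Per-node meminfo lines look like "Node 0 HugePages_Surp: 0".
        surp=$(awk '$3 == "HugePages_Surp:" {print $4}' "$node_dir/meminfo")
        echo "node$node: HugePages_Surp=${surp:-0} (test expects node$node=$expected hugepages)"
    done
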
00:04:36.229 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.229 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.229 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.229 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.229 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.229 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.229 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.229 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.229 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:36.229 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:36.229 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:36.229 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:36.229 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:36.229 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:36.229 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:36.229 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:36.229 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:36.229 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:36.229 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:36.229 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:36.229 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:36.229 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:36.229 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:36.229 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:36.229 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:36.229 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:36.229 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:36.229 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:36.229 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:36.229 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.229 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.229 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8113052 kB' 'MemUsed: 4128924 kB' 'SwapCached: 0 kB' 'Active: 
447840 kB' 'Inactive: 1280252 kB' 'Active(anon): 126980 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1280252 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'FilePages: 1611576 kB' 'Mapped: 47944 kB' 'AnonPages: 118108 kB' 'Shmem: 10464 kB' 'KernelStack: 6128 kB' 'PageTables: 3740 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61116 kB' 'Slab: 132100 kB' 'SReclaimable: 61116 kB' 'SUnreclaim: 70984 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:36.229 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.229 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.229 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.229 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.229 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.229 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.229 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.229 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.229 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.229 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.229 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.229 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.229 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.229 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.229 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.229 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.229 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.229 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.229 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.229 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.229 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.229 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.229 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.229 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.229 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.229 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.229 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.229 
16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.229 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.229 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.229 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.229 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.229 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.229 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.229 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.229 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.229 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.229 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.229 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.229 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.229 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.229 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.229 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.229 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.229 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.229 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.229 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.229 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.229 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.229 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.229 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.229 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.229 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.229 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.229 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.230 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.230 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.230 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.230 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.230 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.230 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.230 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.230 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.230 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.230 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.230 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.230 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.230 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.230 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.230 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.230 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.230 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.230 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.230 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.230 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.230 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.230 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.230 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.230 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.230 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.230 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.230 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.230 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.230 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.230 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.230 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.230 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.230 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.230 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.230 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.230 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.230 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.230 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.230 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.230 16:53:26 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:36.230 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.230 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.230 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.230 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.230 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.230 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.230 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.230 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.230 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.230 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.230 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.230 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.230 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.230 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.230 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.230 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.230 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.230 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.230 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.230 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.230 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.230 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.230 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.230 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.230 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.230 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.230 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.230 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.230 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.230 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.230 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.230 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.230 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.230 16:53:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.230 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.230 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.230 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.230 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.230 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.230 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.230 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.230 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.230 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.230 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.230 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.230 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.230 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.230 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.230 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.230 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.230 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:36.230 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:36.230 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:36.230 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:36.230 node0=1024 expecting 1024 00:04:36.230 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:36.230 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:36.230 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:36.230 16:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:36.230 00:04:36.230 real 0m1.117s 00:04:36.230 user 0m0.560s 00:04:36.230 sys 0m0.557s 00:04:36.230 16:53:26 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:36.230 ************************************ 00:04:36.230 END TEST no_shrink_alloc 00:04:36.230 ************************************ 00:04:36.230 16:53:26 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:36.230 16:53:26 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:36.230 16:53:26 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:36.230 16:53:26 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:36.230 16:53:26 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:36.230 
16:53:26 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:36.230 16:53:26 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:36.230 16:53:26 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:36.230 16:53:26 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:36.230 16:53:26 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:36.230 16:53:26 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:36.230 ************************************ 00:04:36.230 END TEST hugepages 00:04:36.230 ************************************ 00:04:36.230 00:04:36.230 real 0m4.781s 00:04:36.230 user 0m2.311s 00:04:36.230 sys 0m2.404s 00:04:36.230 16:53:26 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:36.230 16:53:26 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:36.230 16:53:26 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:36.230 16:53:26 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:36.230 16:53:26 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:36.230 16:53:26 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:36.230 16:53:26 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:36.230 ************************************ 00:04:36.230 START TEST driver 00:04:36.230 ************************************ 00:04:36.230 16:53:26 setup.sh.driver -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:36.230 * Looking for test storage... 00:04:36.230 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:36.230 16:53:26 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:36.230 16:53:26 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:36.230 16:53:26 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:36.797 16:53:27 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:36.797 16:53:27 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:36.797 16:53:27 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:36.797 16:53:27 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:36.797 ************************************ 00:04:36.797 START TEST guess_driver 00:04:36.797 ************************************ 00:04:36.797 16:53:27 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:04:36.797 16:53:27 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:36.797 16:53:27 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:36.797 16:53:27 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:36.798 16:53:27 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:36.798 16:53:27 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:36.798 16:53:27 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:36.798 16:53:27 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:36.798 16:53:27 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 
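
The driver test that starts in these entries picks the userspace I/O driver for the NVMe devices: vfio is preferred when IOMMU groups are present (or unsafe no-IOMMU mode is enabled), and since this VM has zero IOMMU groups the entries below fall back to uio_pci_generic after confirming modprobe can resolve its modules. A condensed sketch of that decision, reconstructed from the traced setup/driver.sh steps (simplified, not the script itself):

    #!/usr/bin/env bash
    shopt -s nullglob    # so an empty /sys/kernel/iommu_groups really counts as zero
    pick_driver() {
        local groups=(/sys/kernel/iommu_groups/*) noiommu=
        [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
            noiommu=$(</sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
        if (( ${#groups[@]} > 0 )) || [[ $noiommu == Y ]]; then
            echo vfio-pci           # not taken in this run
            return 0
        fi
        # uio_pci_generic is usable if modprobe can resolve its .ko files.
        if modprobe --show-depends uio_pci_generic 2>/dev/null | grep -q '\.ko'; then
            echo uio_pci_generic
            return 0
        fi
        echo 'No valid driver found'
        return 1
    }

    echo "Looking for driver=$(pick_driver)"
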
00:04:36.798 16:53:27 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:04:36.798 16:53:27 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:04:36.798 16:53:27 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:04:36.798 16:53:27 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:04:36.798 16:53:27 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:04:36.798 16:53:27 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:04:36.798 16:53:27 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:04:36.798 16:53:27 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:04:36.798 16:53:27 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:04:36.798 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:04:36.798 16:53:27 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo uio_pci_generic 00:04:36.798 16:53:27 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:04:36.798 Looking for driver=uio_pci_generic 00:04:36.798 16:53:27 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:36.798 16:53:27 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:04:36.798 16:53:27 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:36.798 16:53:27 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:36.798 16:53:27 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:36.798 16:53:27 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:37.734 16:53:27 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:04:37.734 16:53:27 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:04:37.734 16:53:27 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:37.734 16:53:27 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:37.734 16:53:27 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:37.734 16:53:27 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:37.734 16:53:27 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:37.734 16:53:27 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:37.734 16:53:27 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:37.734 16:53:27 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:37.734 16:53:27 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:37.734 16:53:27 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:37.734 16:53:27 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:38.300 00:04:38.300 real 0m1.400s 00:04:38.300 user 0m0.531s 00:04:38.300 sys 0m0.865s 00:04:38.300 16:53:28 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:04:38.300 ************************************ 00:04:38.300 END TEST guess_driver 00:04:38.300 ************************************ 00:04:38.300 16:53:28 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:38.300 16:53:28 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:04:38.300 ************************************ 00:04:38.300 END TEST driver 00:04:38.300 ************************************ 00:04:38.300 00:04:38.300 real 0m2.067s 00:04:38.300 user 0m0.758s 00:04:38.300 sys 0m1.367s 00:04:38.300 16:53:28 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:38.300 16:53:28 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:38.300 16:53:28 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:38.300 16:53:28 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:38.300 16:53:28 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:38.300 16:53:28 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:38.300 16:53:28 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:38.300 ************************************ 00:04:38.300 START TEST devices 00:04:38.300 ************************************ 00:04:38.300 16:53:28 setup.sh.devices -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:38.559 * Looking for test storage... 00:04:38.559 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:38.559 16:53:28 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:38.559 16:53:28 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:38.559 16:53:28 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:38.559 16:53:28 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:39.128 16:53:29 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:39.128 16:53:29 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:39.128 16:53:29 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:39.128 16:53:29 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:39.128 16:53:29 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:39.128 16:53:29 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:39.128 16:53:29 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:39.128 16:53:29 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:39.128 16:53:29 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:39.128 16:53:29 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:39.128 16:53:29 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n2 00:04:39.128 16:53:29 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:04:39.128 16:53:29 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:04:39.128 16:53:29 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:39.128 16:53:29 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:39.128 16:53:29 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n3 
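
The devices test that begins in these entries enumerates NVMe block devices as candidates for the mount tests: zoned namespaces are excluded first, then each remaining disk is probed for an existing partition table (the real script runs scripts/spdk-gpt.py and blkid; every disk here reports "No valid GPT data, bailing") and must be at least min_disk_size, 3221225472 bytes. A compact sketch of that enumeration, reconstructed from the traced setup/devices.sh steps (the 512-byte-sector size calculation stands in for the script's sec_size_to_bytes helper):

    #!/usr/bin/env bash
    shopt -s extglob
    min_disk_size=$((3 * 1024 * 1024 * 1024))     # 3221225472 bytes, as in the trace
    declare -a blocks
    for block in /sys/block/nvme!(*c*); do        # skip nvmeXcYnZ multipath nodes
        dev=${block##*/}
        [[ $(<"$block/queue/zoned") == none ]] || continue   # no zoned namespaces
        # A recognised partition table means the disk is already in use.
        [[ -z $(blkid -s PTTYPE -o value "/dev/$dev" 2>/dev/null) ]] || continue
        size=$(( $(<"$block/size") * 512 ))       # /sys size is in 512-byte sectors
        (( size >= min_disk_size )) || continue
        blocks+=("$dev")
    done
    printf 'candidate disk: %s\n' "${blocks[@]:-none}"
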
00:04:39.128 16:53:29 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:04:39.128 16:53:29 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:04:39.128 16:53:29 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:39.128 16:53:29 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:39.128 16:53:29 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:04:39.128 16:53:29 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:04:39.128 16:53:29 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:39.128 16:53:29 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:39.128 16:53:29 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:39.128 16:53:29 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:39.128 16:53:29 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:39.128 16:53:29 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:39.128 16:53:29 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:39.128 16:53:29 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:39.128 16:53:29 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:39.128 16:53:29 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:39.128 16:53:29 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:04:39.128 16:53:29 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:39.128 16:53:29 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:39.128 16:53:29 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:39.128 16:53:29 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:04:39.128 No valid GPT data, bailing 00:04:39.128 16:53:29 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:39.128 16:53:29 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:39.128 16:53:29 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:39.128 16:53:29 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:39.128 16:53:29 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:39.128 16:53:29 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:39.128 16:53:29 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:04:39.128 16:53:29 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:39.128 16:53:29 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:39.128 16:53:29 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:04:39.128 16:53:29 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:39.128 16:53:29 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n2 00:04:39.128 16:53:29 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:39.128 16:53:29 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:04:39.128 16:53:29 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:39.128 16:53:29 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n2 00:04:39.128 
16:53:29 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:04:39.128 16:53:29 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:04:39.394 No valid GPT data, bailing 00:04:39.394 16:53:29 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:04:39.394 16:53:29 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:39.394 16:53:29 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:39.394 16:53:29 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n2 00:04:39.394 16:53:29 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n2 00:04:39.394 16:53:29 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n2 ]] 00:04:39.394 16:53:29 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:04:39.394 16:53:29 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:39.394 16:53:29 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:39.394 16:53:29 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:04:39.394 16:53:29 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:39.394 16:53:29 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n3 00:04:39.394 16:53:29 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:39.394 16:53:29 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:04:39.394 16:53:29 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:39.394 16:53:29 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n3 00:04:39.394 16:53:29 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:04:39.394 16:53:29 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:04:39.394 No valid GPT data, bailing 00:04:39.394 16:53:29 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:04:39.394 16:53:29 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:39.394 16:53:29 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:39.394 16:53:29 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n3 00:04:39.394 16:53:29 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n3 00:04:39.394 16:53:29 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n3 ]] 00:04:39.394 16:53:29 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:04:39.394 16:53:29 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:39.394 16:53:29 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:39.394 16:53:29 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:04:39.394 16:53:29 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:39.394 16:53:29 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:04:39.394 16:53:29 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:39.394 16:53:29 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:04:39.394 16:53:29 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:04:39.394 16:53:29 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:04:39.394 16:53:29 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:04:39.394 16:53:29 
setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:04:39.394 No valid GPT data, bailing 00:04:39.394 16:53:29 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:39.394 16:53:29 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:39.394 16:53:29 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:39.394 16:53:29 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:04:39.394 16:53:29 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:04:39.394 16:53:29 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:04:39.394 16:53:29 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:04:39.394 16:53:29 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:04:39.394 16:53:29 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:39.394 16:53:29 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:04:39.394 16:53:29 setup.sh.devices -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:04:39.394 16:53:29 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:39.394 16:53:29 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:39.394 16:53:29 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:39.394 16:53:29 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:39.394 16:53:29 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:39.394 ************************************ 00:04:39.394 START TEST nvme_mount 00:04:39.394 ************************************ 00:04:39.394 16:53:29 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:04:39.394 16:53:29 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:39.394 16:53:29 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:39.394 16:53:29 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:39.394 16:53:29 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:39.394 16:53:29 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:39.394 16:53:29 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:39.394 16:53:29 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:39.394 16:53:29 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:39.394 16:53:29 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:39.394 16:53:29 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:39.394 16:53:29 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:39.394 16:53:29 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:39.394 16:53:29 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:39.394 16:53:29 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:39.394 16:53:29 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:39.394 16:53:29 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:39.394 16:53:29 setup.sh.devices.nvme_mount -- 
setup/common.sh@51 -- # (( size /= 4096 )) 00:04:39.394 16:53:29 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:39.394 16:53:29 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:40.763 Creating new GPT entries in memory. 00:04:40.763 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:40.763 other utilities. 00:04:40.764 16:53:30 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:40.764 16:53:30 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:40.764 16:53:30 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:40.764 16:53:30 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:40.764 16:53:30 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:41.700 Creating new GPT entries in memory. 00:04:41.700 The operation has completed successfully. 00:04:41.700 16:53:31 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:41.700 16:53:31 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:41.700 16:53:31 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 56881 00:04:41.700 16:53:31 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:41.700 16:53:31 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:04:41.700 16:53:31 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:41.700 16:53:31 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:41.700 16:53:31 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:41.700 16:53:31 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:41.700 16:53:31 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:41.700 16:53:31 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:41.700 16:53:31 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:41.700 16:53:31 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:41.700 16:53:31 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:41.700 16:53:31 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:41.700 16:53:31 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:41.700 16:53:31 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:41.700 16:53:31 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:41.700 16:53:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.700 16:53:31 setup.sh.devices.nvme_mount -- 
setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:41.700 16:53:31 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:41.700 16:53:31 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:41.700 16:53:31 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:41.700 16:53:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:41.700 16:53:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:41.700 16:53:31 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:41.700 16:53:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.700 16:53:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:41.700 16:53:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.959 16:53:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:41.959 16:53:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.959 16:53:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:41.959 16:53:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.959 16:53:32 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:41.959 16:53:32 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:41.959 16:53:32 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:41.959 16:53:32 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:41.959 16:53:32 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:41.959 16:53:32 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:41.959 16:53:32 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:41.959 16:53:32 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:41.959 16:53:32 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:41.959 16:53:32 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:41.959 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:41.959 16:53:32 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:41.959 16:53:32 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:42.217 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:42.217 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:42.217 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:42.217 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:42.217 16:53:32 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- 
# mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:04:42.217 16:53:32 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:04:42.217 16:53:32 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:42.217 16:53:32 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:42.217 16:53:32 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:42.217 16:53:32 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:42.217 16:53:32 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:42.217 16:53:32 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:42.217 16:53:32 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:42.217 16:53:32 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:42.217 16:53:32 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:42.217 16:53:32 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:42.217 16:53:32 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:42.217 16:53:32 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:42.217 16:53:32 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:42.217 16:53:32 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:42.217 16:53:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.217 16:53:32 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:42.217 16:53:32 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:42.217 16:53:32 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:42.476 16:53:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:42.476 16:53:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:42.476 16:53:32 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:42.476 16:53:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.476 16:53:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:42.476 16:53:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.734 16:53:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:42.734 16:53:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.734 16:53:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:42.734 16:53:32 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.734 16:53:32 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:42.734 16:53:32 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:42.734 16:53:32 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:42.734 16:53:32 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:42.734 16:53:32 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:42.734 16:53:32 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:42.734 16:53:32 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:04:42.734 16:53:32 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:42.734 16:53:32 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:42.734 16:53:32 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:42.734 16:53:32 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:42.734 16:53:32 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:42.734 16:53:32 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:42.734 16:53:32 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:42.734 16:53:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.734 16:53:32 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:42.734 16:53:32 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:42.734 16:53:32 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:42.734 16:53:32 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:42.992 16:53:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:42.992 16:53:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:42.992 16:53:33 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:42.992 16:53:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.992 16:53:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:42.992 16:53:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.251 16:53:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:43.251 16:53:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.251 16:53:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:43.251 16:53:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.251 16:53:33 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:43.251 16:53:33 setup.sh.devices.nvme_mount -- 
setup/devices.sh@68 -- # [[ -n '' ]] 00:04:43.251 16:53:33 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:43.251 16:53:33 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:43.251 16:53:33 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:43.251 16:53:33 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:43.251 16:53:33 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:43.251 16:53:33 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:43.509 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:43.509 ************************************ 00:04:43.509 END TEST nvme_mount 00:04:43.509 ************************************ 00:04:43.509 00:04:43.509 real 0m3.917s 00:04:43.509 user 0m0.715s 00:04:43.509 sys 0m0.955s 00:04:43.509 16:53:33 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:43.509 16:53:33 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:43.509 16:53:33 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:43.509 16:53:33 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:43.509 16:53:33 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:43.509 16:53:33 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:43.509 16:53:33 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:43.509 ************************************ 00:04:43.509 START TEST dm_mount 00:04:43.509 ************************************ 00:04:43.509 16:53:33 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:04:43.509 16:53:33 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:43.509 16:53:33 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:43.509 16:53:33 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:43.509 16:53:33 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:43.509 16:53:33 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:43.509 16:53:33 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:43.509 16:53:33 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:43.509 16:53:33 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:43.509 16:53:33 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:43.509 16:53:33 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:43.509 16:53:33 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:43.509 16:53:33 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:43.509 16:53:33 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:43.509 16:53:33 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:43.509 16:53:33 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:43.509 16:53:33 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:43.509 16:53:33 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:43.509 16:53:33 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 
00:04:43.509 16:53:33 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:43.509 16:53:33 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:43.509 16:53:33 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:44.445 Creating new GPT entries in memory. 00:04:44.445 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:44.445 other utilities. 00:04:44.445 16:53:34 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:44.445 16:53:34 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:44.445 16:53:34 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:44.445 16:53:34 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:44.445 16:53:34 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:45.379 Creating new GPT entries in memory. 00:04:45.379 The operation has completed successfully. 00:04:45.379 16:53:35 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:45.379 16:53:35 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:45.379 16:53:35 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:45.379 16:53:35 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:45.379 16:53:35 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:04:46.761 The operation has completed successfully. 
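
For reference, the partition_drive trace above boils down to a handful of standalone commands. This is only a sketch of the same steps (the sector ranges are copied verbatim from the sgdisk calls in the log; each partition is 262144 sectors, i.e. the 1 GiB size budget divided by 4096, which is ~128 MiB at 512-byte sectors; partprobe here stands in for the sync_dev_uevents.sh helper the test actually uses to wait for the kernel to see the new partitions):

    sgdisk /dev/nvme0n1 --zap-all                                   # destroy any existing GPT/MBR metadata
    flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191      # nvme0n1p1, 262144 sectors
    flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335    # nvme0n1p2, the next 262144 sectors
    partprobe /dev/nvme0n1                                          # ask the kernel to re-read the partition table
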
00:04:46.761 16:53:36 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:46.761 16:53:36 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:46.761 16:53:36 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 57313 00:04:46.761 16:53:36 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:46.761 16:53:36 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:46.761 16:53:36 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:46.761 16:53:36 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:46.761 16:53:36 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:46.761 16:53:36 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:46.761 16:53:36 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:46.761 16:53:36 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:46.761 16:53:36 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:46.761 16:53:36 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:46.761 16:53:36 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:46.761 16:53:36 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:46.761 16:53:36 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:46.761 16:53:36 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:46.761 16:53:36 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:04:46.761 16:53:36 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:46.761 16:53:36 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:46.761 16:53:36 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:46.761 16:53:36 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:46.761 16:53:36 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:46.762 16:53:36 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:46.762 16:53:36 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:46.762 16:53:36 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:46.762 16:53:36 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:46.762 16:53:36 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:46.762 16:53:36 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:46.762 16:53:36 setup.sh.devices.dm_mount -- 
setup/devices.sh@56 -- # : 00:04:46.762 16:53:36 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:46.762 16:53:36 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.762 16:53:36 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:46.762 16:53:36 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:46.762 16:53:36 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:46.762 16:53:36 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:46.762 16:53:36 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:46.762 16:53:36 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:46.762 16:53:36 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:46.762 16:53:36 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.762 16:53:36 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:46.762 16:53:36 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.022 16:53:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:47.022 16:53:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.022 16:53:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:47.022 16:53:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.022 16:53:37 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:47.022 16:53:37 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:04:47.022 16:53:37 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:47.022 16:53:37 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:47.022 16:53:37 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:47.022 16:53:37 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:47.022 16:53:37 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:47.022 16:53:37 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:47.022 16:53:37 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:47.022 16:53:37 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:47.022 16:53:37 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:47.022 16:53:37 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:47.022 16:53:37 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:47.022 16:53:37 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:47.022 16:53:37 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.022 16:53:37 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:47.022 16:53:37 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:47.022 16:53:37 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:47.022 16:53:37 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:47.281 16:53:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:47.281 16:53:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:47.281 16:53:37 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:47.281 16:53:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.281 16:53:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:47.281 16:53:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.540 16:53:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:47.540 16:53:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.540 16:53:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:47.540 16:53:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.540 16:53:37 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:47.540 16:53:37 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:47.540 16:53:37 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:47.540 16:53:37 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:47.540 16:53:37 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:47.540 16:53:37 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:47.540 16:53:37 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:47.540 16:53:37 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:47.540 16:53:37 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:47.540 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:47.540 16:53:37 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:47.540 16:53:37 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:47.540 00:04:47.540 real 0m4.200s 00:04:47.540 user 0m0.449s 00:04:47.540 sys 0m0.706s 00:04:47.540 16:53:37 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:47.540 ************************************ 00:04:47.540 END TEST dm_mount 00:04:47.540 ************************************ 00:04:47.540 16:53:37 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:47.540 16:53:37 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:47.541 16:53:37 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:47.541 16:53:37 
setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:47.541 16:53:37 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:47.806 16:53:37 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:47.806 16:53:37 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:47.806 16:53:37 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:47.806 16:53:37 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:48.071 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:48.071 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:48.071 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:48.071 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:48.071 16:53:38 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:48.071 16:53:38 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:48.071 16:53:38 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:48.071 16:53:38 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:48.071 16:53:38 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:48.071 16:53:38 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:48.071 16:53:38 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:48.071 00:04:48.071 real 0m9.592s 00:04:48.071 user 0m1.794s 00:04:48.071 sys 0m2.216s 00:04:48.071 16:53:38 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:48.071 ************************************ 00:04:48.071 END TEST devices 00:04:48.071 16:53:38 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:48.071 ************************************ 00:04:48.071 16:53:38 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:48.071 ************************************ 00:04:48.071 END TEST setup.sh 00:04:48.071 ************************************ 00:04:48.071 00:04:48.071 real 0m21.424s 00:04:48.071 user 0m7.029s 00:04:48.071 sys 0m8.734s 00:04:48.071 16:53:38 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:48.071 16:53:38 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:48.071 16:53:38 -- common/autotest_common.sh@1142 -- # return 0 00:04:48.071 16:53:38 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:48.637 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:48.637 Hugepages 00:04:48.637 node hugesize free / total 00:04:48.637 node0 1048576kB 0 / 0 00:04:48.637 node0 2048kB 2048 / 2048 00:04:48.637 00:04:48.637 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:48.637 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:48.897 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:04:48.897 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3 00:04:48.897 16:53:39 -- spdk/autotest.sh@130 -- # uname -s 00:04:48.897 16:53:39 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:48.897 16:53:39 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:48.897 16:53:39 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:49.464 0000:00:03.0 (1af4 1001): Active devices: 
mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:49.722 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:49.722 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:49.722 16:53:39 -- common/autotest_common.sh@1532 -- # sleep 1 00:04:51.099 16:53:40 -- common/autotest_common.sh@1533 -- # bdfs=() 00:04:51.099 16:53:40 -- common/autotest_common.sh@1533 -- # local bdfs 00:04:51.099 16:53:40 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:04:51.099 16:53:40 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:04:51.099 16:53:40 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:51.099 16:53:40 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:51.099 16:53:40 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:51.099 16:53:40 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:51.099 16:53:40 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:51.099 16:53:41 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:04:51.099 16:53:41 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:51.099 16:53:41 -- common/autotest_common.sh@1536 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:51.099 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:51.099 Waiting for block devices as requested 00:04:51.358 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:51.358 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:51.358 16:53:41 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:51.358 16:53:41 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:51.359 16:53:41 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:51.359 16:53:41 -- common/autotest_common.sh@1502 -- # grep 0000:00:10.0/nvme/nvme 00:04:51.359 16:53:41 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:51.359 16:53:41 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:51.359 16:53:41 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:51.359 16:53:41 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme1 00:04:51.359 16:53:41 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme1 00:04:51.359 16:53:41 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme1 ]] 00:04:51.359 16:53:41 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme1 00:04:51.359 16:53:41 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:51.359 16:53:41 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:51.359 16:53:41 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:04:51.359 16:53:41 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:51.359 16:53:41 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:51.359 16:53:41 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme1 00:04:51.359 16:53:41 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:51.359 16:53:41 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:51.359 16:53:41 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:51.359 16:53:41 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:51.359 16:53:41 -- common/autotest_common.sh@1557 -- # continue 00:04:51.359 
16:53:41 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:51.359 16:53:41 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:51.359 16:53:41 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:51.359 16:53:41 -- common/autotest_common.sh@1502 -- # grep 0000:00:11.0/nvme/nvme 00:04:51.359 16:53:41 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:51.359 16:53:41 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:51.359 16:53:41 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:51.359 16:53:41 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:04:51.359 16:53:41 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:04:51.359 16:53:41 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:04:51.359 16:53:41 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:04:51.359 16:53:41 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:51.359 16:53:41 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:51.359 16:53:41 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:04:51.359 16:53:41 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:51.359 16:53:41 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:51.359 16:53:41 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:51.359 16:53:41 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:04:51.359 16:53:41 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:51.359 16:53:41 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:51.359 16:53:41 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:51.359 16:53:41 -- common/autotest_common.sh@1557 -- # continue 00:04:51.359 16:53:41 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:51.359 16:53:41 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:51.359 16:53:41 -- common/autotest_common.sh@10 -- # set +x 00:04:51.618 16:53:41 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:51.618 16:53:41 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:51.618 16:53:41 -- common/autotest_common.sh@10 -- # set +x 00:04:51.618 16:53:41 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:52.185 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:52.185 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:52.445 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:52.445 16:53:42 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:52.445 16:53:42 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:52.445 16:53:42 -- common/autotest_common.sh@10 -- # set +x 00:04:52.445 16:53:42 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:52.445 16:53:42 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:04:52.445 16:53:42 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:04:52.445 16:53:42 -- common/autotest_common.sh@1577 -- # bdfs=() 00:04:52.445 16:53:42 -- common/autotest_common.sh@1577 -- # local bdfs 00:04:52.445 16:53:42 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:04:52.445 16:53:42 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:52.445 16:53:42 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:52.445 16:53:42 -- common/autotest_common.sh@1514 -- # 
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:52.445 16:53:42 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:52.445 16:53:42 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:52.445 16:53:42 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:04:52.445 16:53:42 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:52.445 16:53:42 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:52.445 16:53:42 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:52.445 16:53:42 -- common/autotest_common.sh@1580 -- # device=0x0010 00:04:52.445 16:53:42 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:52.445 16:53:42 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:52.445 16:53:42 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:52.445 16:53:42 -- common/autotest_common.sh@1580 -- # device=0x0010 00:04:52.445 16:53:42 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:52.445 16:53:42 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:04:52.445 16:53:42 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:04:52.445 16:53:42 -- common/autotest_common.sh@1593 -- # return 0 00:04:52.445 16:53:42 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:52.445 16:53:42 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:52.445 16:53:42 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:52.445 16:53:42 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:52.445 16:53:42 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:52.445 16:53:42 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:52.445 16:53:42 -- common/autotest_common.sh@10 -- # set +x 00:04:52.445 16:53:42 -- spdk/autotest.sh@164 -- # [[ 1 -eq 1 ]] 00:04:52.445 16:53:42 -- spdk/autotest.sh@165 -- # export SPDK_SOCK_IMPL_DEFAULT=uring 00:04:52.445 16:53:42 -- spdk/autotest.sh@165 -- # SPDK_SOCK_IMPL_DEFAULT=uring 00:04:52.445 16:53:42 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:52.445 16:53:42 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:52.445 16:53:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:52.445 16:53:42 -- common/autotest_common.sh@10 -- # set +x 00:04:52.445 ************************************ 00:04:52.445 START TEST env 00:04:52.445 ************************************ 00:04:52.445 16:53:42 env -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:52.705 * Looking for test storage... 
00:04:52.705 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:52.705 16:53:42 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:52.705 16:53:42 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:52.705 16:53:42 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:52.705 16:53:42 env -- common/autotest_common.sh@10 -- # set +x 00:04:52.705 ************************************ 00:04:52.705 START TEST env_memory 00:04:52.705 ************************************ 00:04:52.705 16:53:42 env.env_memory -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:52.705 00:04:52.705 00:04:52.705 CUnit - A unit testing framework for C - Version 2.1-3 00:04:52.705 http://cunit.sourceforge.net/ 00:04:52.705 00:04:52.705 00:04:52.705 Suite: memory 00:04:52.705 Test: alloc and free memory map ...[2024-07-15 16:53:42.858994] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:52.705 passed 00:04:52.705 Test: mem map translation ...[2024-07-15 16:53:42.891119] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:52.705 [2024-07-15 16:53:42.891455] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:52.705 [2024-07-15 16:53:42.891783] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:52.705 [2024-07-15 16:53:42.892029] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:52.705 passed 00:04:52.705 Test: mem map registration ...[2024-07-15 16:53:42.956539] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:52.705 [2024-07-15 16:53:42.956846] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:52.705 passed 00:04:52.964 Test: mem map adjacent registrations ...passed 00:04:52.964 00:04:52.964 Run Summary: Type Total Ran Passed Failed Inactive 00:04:52.964 suites 1 1 n/a 0 0 00:04:52.964 tests 4 4 4 0 0 00:04:52.964 asserts 152 152 152 0 n/a 00:04:52.964 00:04:52.964 Elapsed time = 0.222 seconds 00:04:52.964 00:04:52.964 real 0m0.242s 00:04:52.964 user 0m0.222s 00:04:52.964 sys 0m0.014s 00:04:52.964 16:53:43 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:52.964 16:53:43 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:52.964 ************************************ 00:04:52.964 END TEST env_memory 00:04:52.964 ************************************ 00:04:52.964 16:53:43 env -- common/autotest_common.sh@1142 -- # return 0 00:04:52.964 16:53:43 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:52.964 16:53:43 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:52.964 16:53:43 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:52.964 16:53:43 env -- common/autotest_common.sh@10 -- # set +x 00:04:52.964 ************************************ 00:04:52.964 START TEST env_vtophys 
00:04:52.964 ************************************ 00:04:52.964 16:53:43 env.env_vtophys -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:52.964 EAL: lib.eal log level changed from notice to debug 00:04:52.964 EAL: Detected lcore 0 as core 0 on socket 0 00:04:52.964 EAL: Detected lcore 1 as core 0 on socket 0 00:04:52.964 EAL: Detected lcore 2 as core 0 on socket 0 00:04:52.964 EAL: Detected lcore 3 as core 0 on socket 0 00:04:52.964 EAL: Detected lcore 4 as core 0 on socket 0 00:04:52.964 EAL: Detected lcore 5 as core 0 on socket 0 00:04:52.964 EAL: Detected lcore 6 as core 0 on socket 0 00:04:52.964 EAL: Detected lcore 7 as core 0 on socket 0 00:04:52.964 EAL: Detected lcore 8 as core 0 on socket 0 00:04:52.964 EAL: Detected lcore 9 as core 0 on socket 0 00:04:52.964 EAL: Maximum logical cores by configuration: 128 00:04:52.964 EAL: Detected CPU lcores: 10 00:04:52.964 EAL: Detected NUMA nodes: 1 00:04:52.964 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:52.964 EAL: Detected shared linkage of DPDK 00:04:52.964 EAL: No shared files mode enabled, IPC will be disabled 00:04:52.964 EAL: Selected IOVA mode 'PA' 00:04:52.964 EAL: Probing VFIO support... 00:04:52.964 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:52.964 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:52.964 EAL: Ask a virtual area of 0x2e000 bytes 00:04:52.964 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:52.964 EAL: Setting up physically contiguous memory... 00:04:52.964 EAL: Setting maximum number of open files to 524288 00:04:52.964 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:52.964 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:52.964 EAL: Ask a virtual area of 0x61000 bytes 00:04:52.964 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:52.964 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:52.964 EAL: Ask a virtual area of 0x400000000 bytes 00:04:52.964 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:52.964 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:52.964 EAL: Ask a virtual area of 0x61000 bytes 00:04:52.964 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:52.964 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:52.964 EAL: Ask a virtual area of 0x400000000 bytes 00:04:52.965 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:52.965 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:52.965 EAL: Ask a virtual area of 0x61000 bytes 00:04:52.965 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:52.965 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:52.965 EAL: Ask a virtual area of 0x400000000 bytes 00:04:52.965 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:52.965 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:52.965 EAL: Ask a virtual area of 0x61000 bytes 00:04:52.965 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:52.965 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:52.965 EAL: Ask a virtual area of 0x400000000 bytes 00:04:52.965 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:52.965 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:52.965 EAL: Hugepages will be freed exactly as allocated. 
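
A quick sanity check on the virtual-area sizes printed above: each memseg list is created with n_segs:8192 at hugepage_sz:2097152 (2 MiB), so the reservation per list is 8192 * 2 MiB = 16 GiB, which is exactly the "size 400000000" (hex) shown for all four lists. Nothing SPDK-specific is needed to verify this:

    $ echo $(( 8192 * 2097152 ))
    17179869184
    $ printf '0x%x\n' $(( 8192 * 2097152 ))
    0x400000000
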
00:04:52.965 EAL: No shared files mode enabled, IPC is disabled 00:04:52.965 EAL: No shared files mode enabled, IPC is disabled 00:04:52.965 EAL: TSC frequency is ~2200000 KHz 00:04:52.965 EAL: Main lcore 0 is ready (tid=7f393cde5a00;cpuset=[0]) 00:04:52.965 EAL: Trying to obtain current memory policy. 00:04:52.965 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:52.965 EAL: Restoring previous memory policy: 0 00:04:52.965 EAL: request: mp_malloc_sync 00:04:52.965 EAL: No shared files mode enabled, IPC is disabled 00:04:52.965 EAL: Heap on socket 0 was expanded by 2MB 00:04:52.965 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:52.965 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:52.965 EAL: Mem event callback 'spdk:(nil)' registered 00:04:52.965 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:52.965 00:04:52.965 00:04:52.965 CUnit - A unit testing framework for C - Version 2.1-3 00:04:52.965 http://cunit.sourceforge.net/ 00:04:52.965 00:04:52.965 00:04:52.965 Suite: components_suite 00:04:52.965 Test: vtophys_malloc_test ...passed 00:04:52.965 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:52.965 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:52.965 EAL: Restoring previous memory policy: 4 00:04:52.965 EAL: Calling mem event callback 'spdk:(nil)' 00:04:52.965 EAL: request: mp_malloc_sync 00:04:52.965 EAL: No shared files mode enabled, IPC is disabled 00:04:52.965 EAL: Heap on socket 0 was expanded by 4MB 00:04:52.965 EAL: Calling mem event callback 'spdk:(nil)' 00:04:52.965 EAL: request: mp_malloc_sync 00:04:52.965 EAL: No shared files mode enabled, IPC is disabled 00:04:52.965 EAL: Heap on socket 0 was shrunk by 4MB 00:04:52.965 EAL: Trying to obtain current memory policy. 00:04:52.965 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:52.965 EAL: Restoring previous memory policy: 4 00:04:52.965 EAL: Calling mem event callback 'spdk:(nil)' 00:04:52.965 EAL: request: mp_malloc_sync 00:04:52.965 EAL: No shared files mode enabled, IPC is disabled 00:04:52.965 EAL: Heap on socket 0 was expanded by 6MB 00:04:52.965 EAL: Calling mem event callback 'spdk:(nil)' 00:04:52.965 EAL: request: mp_malloc_sync 00:04:52.965 EAL: No shared files mode enabled, IPC is disabled 00:04:52.965 EAL: Heap on socket 0 was shrunk by 6MB 00:04:52.965 EAL: Trying to obtain current memory policy. 00:04:52.965 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:52.965 EAL: Restoring previous memory policy: 4 00:04:52.965 EAL: Calling mem event callback 'spdk:(nil)' 00:04:52.965 EAL: request: mp_malloc_sync 00:04:52.965 EAL: No shared files mode enabled, IPC is disabled 00:04:52.965 EAL: Heap on socket 0 was expanded by 10MB 00:04:53.224 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.224 EAL: request: mp_malloc_sync 00:04:53.224 EAL: No shared files mode enabled, IPC is disabled 00:04:53.224 EAL: Heap on socket 0 was shrunk by 10MB 00:04:53.224 EAL: Trying to obtain current memory policy. 
00:04:53.224 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:53.224 EAL: Restoring previous memory policy: 4 00:04:53.224 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.224 EAL: request: mp_malloc_sync 00:04:53.224 EAL: No shared files mode enabled, IPC is disabled 00:04:53.224 EAL: Heap on socket 0 was expanded by 18MB 00:04:53.224 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.224 EAL: request: mp_malloc_sync 00:04:53.224 EAL: No shared files mode enabled, IPC is disabled 00:04:53.224 EAL: Heap on socket 0 was shrunk by 18MB 00:04:53.224 EAL: Trying to obtain current memory policy. 00:04:53.224 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:53.224 EAL: Restoring previous memory policy: 4 00:04:53.224 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.224 EAL: request: mp_malloc_sync 00:04:53.224 EAL: No shared files mode enabled, IPC is disabled 00:04:53.224 EAL: Heap on socket 0 was expanded by 34MB 00:04:53.224 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.224 EAL: request: mp_malloc_sync 00:04:53.224 EAL: No shared files mode enabled, IPC is disabled 00:04:53.224 EAL: Heap on socket 0 was shrunk by 34MB 00:04:53.224 EAL: Trying to obtain current memory policy. 00:04:53.224 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:53.224 EAL: Restoring previous memory policy: 4 00:04:53.224 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.224 EAL: request: mp_malloc_sync 00:04:53.224 EAL: No shared files mode enabled, IPC is disabled 00:04:53.224 EAL: Heap on socket 0 was expanded by 66MB 00:04:53.224 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.224 EAL: request: mp_malloc_sync 00:04:53.224 EAL: No shared files mode enabled, IPC is disabled 00:04:53.224 EAL: Heap on socket 0 was shrunk by 66MB 00:04:53.224 EAL: Trying to obtain current memory policy. 00:04:53.224 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:53.224 EAL: Restoring previous memory policy: 4 00:04:53.224 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.224 EAL: request: mp_malloc_sync 00:04:53.224 EAL: No shared files mode enabled, IPC is disabled 00:04:53.224 EAL: Heap on socket 0 was expanded by 130MB 00:04:53.224 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.224 EAL: request: mp_malloc_sync 00:04:53.224 EAL: No shared files mode enabled, IPC is disabled 00:04:53.224 EAL: Heap on socket 0 was shrunk by 130MB 00:04:53.224 EAL: Trying to obtain current memory policy. 00:04:53.224 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:53.224 EAL: Restoring previous memory policy: 4 00:04:53.224 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.224 EAL: request: mp_malloc_sync 00:04:53.224 EAL: No shared files mode enabled, IPC is disabled 00:04:53.224 EAL: Heap on socket 0 was expanded by 258MB 00:04:53.224 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.483 EAL: request: mp_malloc_sync 00:04:53.483 EAL: No shared files mode enabled, IPC is disabled 00:04:53.483 EAL: Heap on socket 0 was shrunk by 258MB 00:04:53.483 EAL: Trying to obtain current memory policy. 
00:04:53.483 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:53.483 EAL: Restoring previous memory policy: 4 00:04:53.483 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.483 EAL: request: mp_malloc_sync 00:04:53.483 EAL: No shared files mode enabled, IPC is disabled 00:04:53.483 EAL: Heap on socket 0 was expanded by 514MB 00:04:53.741 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.742 EAL: request: mp_malloc_sync 00:04:53.742 EAL: No shared files mode enabled, IPC is disabled 00:04:53.742 EAL: Heap on socket 0 was shrunk by 514MB 00:04:53.742 EAL: Trying to obtain current memory policy. 00:04:53.742 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:54.000 EAL: Restoring previous memory policy: 4 00:04:54.000 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.000 EAL: request: mp_malloc_sync 00:04:54.000 EAL: No shared files mode enabled, IPC is disabled 00:04:54.000 EAL: Heap on socket 0 was expanded by 1026MB 00:04:54.259 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.259 passed 00:04:54.259 00:04:54.259 Run Summary: Type Total Ran Passed Failed Inactive 00:04:54.259 suites 1 1 n/a 0 0 00:04:54.259 tests 2 2 2 0 0 00:04:54.259 asserts 5358 5358 5358 0 n/a 00:04:54.259 00:04:54.259 Elapsed time = 1.252 seconds 00:04:54.259 EAL: request: mp_malloc_sync 00:04:54.259 EAL: No shared files mode enabled, IPC is disabled 00:04:54.259 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:54.259 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.259 EAL: request: mp_malloc_sync 00:04:54.259 EAL: No shared files mode enabled, IPC is disabled 00:04:54.259 EAL: Heap on socket 0 was shrunk by 2MB 00:04:54.259 EAL: No shared files mode enabled, IPC is disabled 00:04:54.259 EAL: No shared files mode enabled, IPC is disabled 00:04:54.259 EAL: No shared files mode enabled, IPC is disabled 00:04:54.259 00:04:54.259 real 0m1.448s 00:04:54.259 user 0m0.799s 00:04:54.259 sys 0m0.515s 00:04:54.259 16:53:44 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:54.259 16:53:44 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:54.259 ************************************ 00:04:54.259 END TEST env_vtophys 00:04:54.259 ************************************ 00:04:54.518 16:53:44 env -- common/autotest_common.sh@1142 -- # return 0 00:04:54.518 16:53:44 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:54.518 16:53:44 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:54.518 16:53:44 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:54.518 16:53:44 env -- common/autotest_common.sh@10 -- # set +x 00:04:54.518 ************************************ 00:04:54.518 START TEST env_pci 00:04:54.518 ************************************ 00:04:54.518 16:53:44 env.env_pci -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:54.518 00:04:54.518 00:04:54.518 CUnit - A unit testing framework for C - Version 2.1-3 00:04:54.518 http://cunit.sourceforge.net/ 00:04:54.518 00:04:54.518 00:04:54.518 Suite: pci 00:04:54.518 Test: pci_hook ...[2024-07-15 16:53:44.609969] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 58501 has claimed it 00:04:54.518 passed 00:04:54.518 00:04:54.518 EAL: Cannot find device (10000:00:01.0) 00:04:54.518 EAL: Failed to attach device on primary process 00:04:54.518 Run Summary: Type Total Ran Passed Failed 
Inactive 00:04:54.518 suites 1 1 n/a 0 0 00:04:54.518 tests 1 1 1 0 0 00:04:54.518 asserts 25 25 25 0 n/a 00:04:54.518 00:04:54.518 Elapsed time = 0.003 seconds 00:04:54.518 00:04:54.518 real 0m0.019s 00:04:54.518 user 0m0.006s 00:04:54.518 sys 0m0.013s 00:04:54.518 16:53:44 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:54.518 ************************************ 00:04:54.518 END TEST env_pci 00:04:54.518 ************************************ 00:04:54.518 16:53:44 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:54.518 16:53:44 env -- common/autotest_common.sh@1142 -- # return 0 00:04:54.518 16:53:44 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:54.518 16:53:44 env -- env/env.sh@15 -- # uname 00:04:54.518 16:53:44 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:54.518 16:53:44 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:54.518 16:53:44 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:54.518 16:53:44 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:04:54.518 16:53:44 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:54.518 16:53:44 env -- common/autotest_common.sh@10 -- # set +x 00:04:54.518 ************************************ 00:04:54.518 START TEST env_dpdk_post_init 00:04:54.518 ************************************ 00:04:54.518 16:53:44 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:54.518 EAL: Detected CPU lcores: 10 00:04:54.518 EAL: Detected NUMA nodes: 1 00:04:54.518 EAL: Detected shared linkage of DPDK 00:04:54.518 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:54.518 EAL: Selected IOVA mode 'PA' 00:04:54.518 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:54.777 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:54.777 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:54.777 Starting DPDK initialization... 00:04:54.777 Starting SPDK post initialization... 00:04:54.777 SPDK NVMe probe 00:04:54.777 Attaching to 0000:00:10.0 00:04:54.777 Attaching to 0000:00:11.0 00:04:54.777 Attached to 0000:00:10.0 00:04:54.777 Attached to 0000:00:11.0 00:04:54.777 Cleaning up... 
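
The post-init probe above can be reproduced by hand from the same build tree. This is only a sketch under a couple of assumptions (HUGEMEM is setup.sh's optional hugepage-size knob in MB and does not appear in the trace; root privileges are assumed; the binary path, core mask and base virtual address are the ones the test itself used):

    sudo HUGEMEM=2048 /home/vagrant/spdk_repo/spdk/scripts/setup.sh                 # bind the NVMe controllers for userspace use
    sudo /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init \
        -c 0x1 --base-virtaddr=0x200000000000                                       # same core mask and base VA as above
    sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset                        # hand the devices back to the kernel nvme driver
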
00:04:54.777 00:04:54.777 real 0m0.176s 00:04:54.777 user 0m0.042s 00:04:54.777 sys 0m0.034s 00:04:54.777 16:53:44 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:54.777 16:53:44 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:54.777 ************************************ 00:04:54.777 END TEST env_dpdk_post_init 00:04:54.777 ************************************ 00:04:54.777 16:53:44 env -- common/autotest_common.sh@1142 -- # return 0 00:04:54.777 16:53:44 env -- env/env.sh@26 -- # uname 00:04:54.777 16:53:44 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:54.777 16:53:44 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:54.777 16:53:44 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:54.777 16:53:44 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:54.777 16:53:44 env -- common/autotest_common.sh@10 -- # set +x 00:04:54.777 ************************************ 00:04:54.777 START TEST env_mem_callbacks 00:04:54.777 ************************************ 00:04:54.777 16:53:44 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:54.777 EAL: Detected CPU lcores: 10 00:04:54.777 EAL: Detected NUMA nodes: 1 00:04:54.777 EAL: Detected shared linkage of DPDK 00:04:54.777 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:54.777 EAL: Selected IOVA mode 'PA' 00:04:54.777 00:04:54.777 00:04:54.777 CUnit - A unit testing framework for C - Version 2.1-3 00:04:54.777 http://cunit.sourceforge.net/ 00:04:54.777 00:04:54.777 00:04:54.777 Suite: memory 00:04:54.777 Test: test ... 00:04:54.777 register 0x200000200000 2097152 00:04:54.777 malloc 3145728 00:04:54.777 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:54.777 register 0x200000400000 4194304 00:04:54.777 buf 0x200000500000 len 3145728 PASSED 00:04:54.777 malloc 64 00:04:54.777 buf 0x2000004fff40 len 64 PASSED 00:04:54.777 malloc 4194304 00:04:54.777 register 0x200000800000 6291456 00:04:54.777 buf 0x200000a00000 len 4194304 PASSED 00:04:54.777 free 0x200000500000 3145728 00:04:54.777 free 0x2000004fff40 64 00:04:54.777 unregister 0x200000400000 4194304 PASSED 00:04:54.777 free 0x200000a00000 4194304 00:04:54.777 unregister 0x200000800000 6291456 PASSED 00:04:54.777 malloc 8388608 00:04:54.777 register 0x200000400000 10485760 00:04:54.777 buf 0x200000600000 len 8388608 PASSED 00:04:54.777 free 0x200000600000 8388608 00:04:54.777 unregister 0x200000400000 10485760 PASSED 00:04:54.777 passed 00:04:54.777 00:04:54.777 Run Summary: Type Total Ran Passed Failed Inactive 00:04:54.777 suites 1 1 n/a 0 0 00:04:54.777 tests 1 1 1 0 0 00:04:54.777 asserts 15 15 15 0 n/a 00:04:54.777 00:04:54.777 Elapsed time = 0.009 seconds 00:04:54.777 00:04:54.777 real 0m0.143s 00:04:54.777 user 0m0.018s 00:04:54.777 sys 0m0.025s 00:04:54.777 16:53:45 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:54.777 16:53:45 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:54.777 ************************************ 00:04:54.777 END TEST env_mem_callbacks 00:04:54.777 ************************************ 00:04:55.037 16:53:45 env -- common/autotest_common.sh@1142 -- # return 0 00:04:55.037 00:04:55.037 real 0m2.366s 00:04:55.037 user 0m1.195s 00:04:55.037 sys 0m0.816s 00:04:55.037 16:53:45 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:55.037 
16:53:45 env -- common/autotest_common.sh@10 -- # set +x 00:04:55.037 ************************************ 00:04:55.037 END TEST env 00:04:55.037 ************************************ 00:04:55.037 16:53:45 -- common/autotest_common.sh@1142 -- # return 0 00:04:55.037 16:53:45 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:55.037 16:53:45 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:55.037 16:53:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:55.037 16:53:45 -- common/autotest_common.sh@10 -- # set +x 00:04:55.037 ************************************ 00:04:55.037 START TEST rpc 00:04:55.037 ************************************ 00:04:55.037 16:53:45 rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:55.037 * Looking for test storage... 00:04:55.037 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:55.037 16:53:45 rpc -- rpc/rpc.sh@65 -- # spdk_pid=58616 00:04:55.037 16:53:45 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:55.037 16:53:45 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:55.037 16:53:45 rpc -- rpc/rpc.sh@67 -- # waitforlisten 58616 00:04:55.037 16:53:45 rpc -- common/autotest_common.sh@829 -- # '[' -z 58616 ']' 00:04:55.037 16:53:45 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:55.037 16:53:45 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:55.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:55.037 16:53:45 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:55.037 16:53:45 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:55.037 16:53:45 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:55.037 [2024-07-15 16:53:45.269317] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:04:55.037 [2024-07-15 16:53:45.269452] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58616 ] 00:04:55.295 [2024-07-15 16:53:45.404833] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:55.295 [2024-07-15 16:53:45.511130] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:55.295 [2024-07-15 16:53:45.511206] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 58616' to capture a snapshot of events at runtime. 00:04:55.295 [2024-07-15 16:53:45.511218] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:55.295 [2024-07-15 16:53:45.511227] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:55.296 [2024-07-15 16:53:45.511234] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid58616 for offline analysis/debug. 
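The app_setup_trace notices above name the trace shared memory created for pid 58616. As a hedged aside (spdk_trace ships under build/bin in an SPDK tree), the snapshot can be taken either live or from the copied shm file:

  # While the target is still running: snapshot by app name + pid (values taken from the log above)
  ./build/bin/spdk_trace -s spdk_tgt -p 58616
  # After it exits: parse the file copied out of /dev/shm for offline analysis
  ./build/bin/spdk_trace -f /dev/shm/spdk_tgt_trace.pid58616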
00:04:55.296 [2024-07-15 16:53:45.511265] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.296 [2024-07-15 16:53:45.565114] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:56.230 16:53:46 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:56.230 16:53:46 rpc -- common/autotest_common.sh@862 -- # return 0 00:04:56.230 16:53:46 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:56.231 16:53:46 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:56.231 16:53:46 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:56.231 16:53:46 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:56.231 16:53:46 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:56.231 16:53:46 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:56.231 16:53:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:56.231 ************************************ 00:04:56.231 START TEST rpc_integrity 00:04:56.231 ************************************ 00:04:56.231 16:53:46 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:56.231 16:53:46 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:56.231 16:53:46 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:56.231 16:53:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:56.231 16:53:46 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:56.231 16:53:46 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:56.231 16:53:46 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:56.231 16:53:46 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:56.231 16:53:46 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:56.231 16:53:46 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:56.231 16:53:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:56.231 16:53:46 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:56.231 16:53:46 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:56.231 16:53:46 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:56.231 16:53:46 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:56.231 16:53:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:56.231 16:53:46 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:56.231 16:53:46 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:56.231 { 00:04:56.231 "name": "Malloc0", 00:04:56.231 "aliases": [ 00:04:56.231 "48b19030-245a-4995-8542-e51de6f1e28a" 00:04:56.231 ], 00:04:56.231 "product_name": "Malloc disk", 00:04:56.231 "block_size": 512, 00:04:56.231 "num_blocks": 16384, 00:04:56.231 "uuid": "48b19030-245a-4995-8542-e51de6f1e28a", 00:04:56.231 "assigned_rate_limits": { 00:04:56.231 "rw_ios_per_sec": 0, 00:04:56.231 "rw_mbytes_per_sec": 0, 00:04:56.231 "r_mbytes_per_sec": 0, 00:04:56.231 "w_mbytes_per_sec": 0 00:04:56.231 }, 00:04:56.231 "claimed": false, 00:04:56.231 "zoned": false, 00:04:56.231 
"supported_io_types": { 00:04:56.231 "read": true, 00:04:56.231 "write": true, 00:04:56.231 "unmap": true, 00:04:56.231 "flush": true, 00:04:56.231 "reset": true, 00:04:56.231 "nvme_admin": false, 00:04:56.231 "nvme_io": false, 00:04:56.231 "nvme_io_md": false, 00:04:56.231 "write_zeroes": true, 00:04:56.231 "zcopy": true, 00:04:56.231 "get_zone_info": false, 00:04:56.231 "zone_management": false, 00:04:56.231 "zone_append": false, 00:04:56.231 "compare": false, 00:04:56.231 "compare_and_write": false, 00:04:56.231 "abort": true, 00:04:56.231 "seek_hole": false, 00:04:56.231 "seek_data": false, 00:04:56.231 "copy": true, 00:04:56.231 "nvme_iov_md": false 00:04:56.231 }, 00:04:56.231 "memory_domains": [ 00:04:56.231 { 00:04:56.231 "dma_device_id": "system", 00:04:56.231 "dma_device_type": 1 00:04:56.231 }, 00:04:56.231 { 00:04:56.231 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:56.231 "dma_device_type": 2 00:04:56.231 } 00:04:56.231 ], 00:04:56.231 "driver_specific": {} 00:04:56.231 } 00:04:56.231 ]' 00:04:56.231 16:53:46 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:56.231 16:53:46 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:56.231 16:53:46 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:56.231 16:53:46 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:56.231 16:53:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:56.231 [2024-07-15 16:53:46.364176] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:56.231 [2024-07-15 16:53:46.364234] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:56.231 [2024-07-15 16:53:46.364257] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x21b7da0 00:04:56.231 [2024-07-15 16:53:46.364267] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:56.231 [2024-07-15 16:53:46.365953] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:56.231 [2024-07-15 16:53:46.365989] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:56.231 Passthru0 00:04:56.231 16:53:46 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:56.231 16:53:46 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:56.231 16:53:46 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:56.231 16:53:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:56.231 16:53:46 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:56.231 16:53:46 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:56.231 { 00:04:56.231 "name": "Malloc0", 00:04:56.231 "aliases": [ 00:04:56.231 "48b19030-245a-4995-8542-e51de6f1e28a" 00:04:56.231 ], 00:04:56.231 "product_name": "Malloc disk", 00:04:56.231 "block_size": 512, 00:04:56.231 "num_blocks": 16384, 00:04:56.231 "uuid": "48b19030-245a-4995-8542-e51de6f1e28a", 00:04:56.231 "assigned_rate_limits": { 00:04:56.231 "rw_ios_per_sec": 0, 00:04:56.231 "rw_mbytes_per_sec": 0, 00:04:56.231 "r_mbytes_per_sec": 0, 00:04:56.231 "w_mbytes_per_sec": 0 00:04:56.231 }, 00:04:56.231 "claimed": true, 00:04:56.231 "claim_type": "exclusive_write", 00:04:56.231 "zoned": false, 00:04:56.231 "supported_io_types": { 00:04:56.231 "read": true, 00:04:56.231 "write": true, 00:04:56.231 "unmap": true, 00:04:56.231 "flush": true, 00:04:56.231 "reset": true, 00:04:56.231 "nvme_admin": false, 
00:04:56.231 "nvme_io": false, 00:04:56.231 "nvme_io_md": false, 00:04:56.231 "write_zeroes": true, 00:04:56.231 "zcopy": true, 00:04:56.231 "get_zone_info": false, 00:04:56.231 "zone_management": false, 00:04:56.231 "zone_append": false, 00:04:56.231 "compare": false, 00:04:56.231 "compare_and_write": false, 00:04:56.231 "abort": true, 00:04:56.231 "seek_hole": false, 00:04:56.231 "seek_data": false, 00:04:56.231 "copy": true, 00:04:56.231 "nvme_iov_md": false 00:04:56.231 }, 00:04:56.231 "memory_domains": [ 00:04:56.231 { 00:04:56.231 "dma_device_id": "system", 00:04:56.231 "dma_device_type": 1 00:04:56.231 }, 00:04:56.231 { 00:04:56.231 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:56.231 "dma_device_type": 2 00:04:56.231 } 00:04:56.231 ], 00:04:56.231 "driver_specific": {} 00:04:56.231 }, 00:04:56.231 { 00:04:56.231 "name": "Passthru0", 00:04:56.231 "aliases": [ 00:04:56.231 "b25a7d47-ffe9-5d24-874a-36978520dba8" 00:04:56.231 ], 00:04:56.231 "product_name": "passthru", 00:04:56.231 "block_size": 512, 00:04:56.231 "num_blocks": 16384, 00:04:56.231 "uuid": "b25a7d47-ffe9-5d24-874a-36978520dba8", 00:04:56.231 "assigned_rate_limits": { 00:04:56.231 "rw_ios_per_sec": 0, 00:04:56.231 "rw_mbytes_per_sec": 0, 00:04:56.231 "r_mbytes_per_sec": 0, 00:04:56.231 "w_mbytes_per_sec": 0 00:04:56.231 }, 00:04:56.231 "claimed": false, 00:04:56.231 "zoned": false, 00:04:56.231 "supported_io_types": { 00:04:56.231 "read": true, 00:04:56.231 "write": true, 00:04:56.231 "unmap": true, 00:04:56.231 "flush": true, 00:04:56.231 "reset": true, 00:04:56.231 "nvme_admin": false, 00:04:56.231 "nvme_io": false, 00:04:56.231 "nvme_io_md": false, 00:04:56.231 "write_zeroes": true, 00:04:56.231 "zcopy": true, 00:04:56.231 "get_zone_info": false, 00:04:56.231 "zone_management": false, 00:04:56.231 "zone_append": false, 00:04:56.231 "compare": false, 00:04:56.231 "compare_and_write": false, 00:04:56.231 "abort": true, 00:04:56.231 "seek_hole": false, 00:04:56.231 "seek_data": false, 00:04:56.231 "copy": true, 00:04:56.231 "nvme_iov_md": false 00:04:56.231 }, 00:04:56.231 "memory_domains": [ 00:04:56.231 { 00:04:56.231 "dma_device_id": "system", 00:04:56.231 "dma_device_type": 1 00:04:56.231 }, 00:04:56.231 { 00:04:56.231 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:56.231 "dma_device_type": 2 00:04:56.231 } 00:04:56.231 ], 00:04:56.231 "driver_specific": { 00:04:56.231 "passthru": { 00:04:56.231 "name": "Passthru0", 00:04:56.231 "base_bdev_name": "Malloc0" 00:04:56.231 } 00:04:56.231 } 00:04:56.231 } 00:04:56.231 ]' 00:04:56.231 16:53:46 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:56.231 16:53:46 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:56.231 16:53:46 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:56.231 16:53:46 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:56.231 16:53:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:56.231 16:53:46 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:56.231 16:53:46 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:56.231 16:53:46 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:56.231 16:53:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:56.232 16:53:46 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:56.232 16:53:46 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:56.232 16:53:46 rpc.rpc_integrity -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:04:56.232 16:53:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:56.232 16:53:46 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:56.232 16:53:46 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:56.232 16:53:46 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:56.490 16:53:46 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:56.490 00:04:56.490 real 0m0.331s 00:04:56.490 user 0m0.228s 00:04:56.490 sys 0m0.038s 00:04:56.490 16:53:46 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:56.490 16:53:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:56.490 ************************************ 00:04:56.490 END TEST rpc_integrity 00:04:56.490 ************************************ 00:04:56.490 16:53:46 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:56.490 16:53:46 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:56.490 16:53:46 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:56.490 16:53:46 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:56.491 16:53:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:56.491 ************************************ 00:04:56.491 START TEST rpc_plugins 00:04:56.491 ************************************ 00:04:56.491 16:53:46 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:04:56.491 16:53:46 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:56.491 16:53:46 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:56.491 16:53:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:56.491 16:53:46 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:56.491 16:53:46 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:56.491 16:53:46 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:56.491 16:53:46 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:56.491 16:53:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:56.491 16:53:46 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:56.491 16:53:46 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:56.491 { 00:04:56.491 "name": "Malloc1", 00:04:56.491 "aliases": [ 00:04:56.491 "c44bcd1f-9427-48f7-8d8e-02b6fd505203" 00:04:56.491 ], 00:04:56.491 "product_name": "Malloc disk", 00:04:56.491 "block_size": 4096, 00:04:56.491 "num_blocks": 256, 00:04:56.491 "uuid": "c44bcd1f-9427-48f7-8d8e-02b6fd505203", 00:04:56.491 "assigned_rate_limits": { 00:04:56.491 "rw_ios_per_sec": 0, 00:04:56.491 "rw_mbytes_per_sec": 0, 00:04:56.491 "r_mbytes_per_sec": 0, 00:04:56.491 "w_mbytes_per_sec": 0 00:04:56.491 }, 00:04:56.491 "claimed": false, 00:04:56.491 "zoned": false, 00:04:56.491 "supported_io_types": { 00:04:56.491 "read": true, 00:04:56.491 "write": true, 00:04:56.491 "unmap": true, 00:04:56.491 "flush": true, 00:04:56.491 "reset": true, 00:04:56.491 "nvme_admin": false, 00:04:56.491 "nvme_io": false, 00:04:56.491 "nvme_io_md": false, 00:04:56.491 "write_zeroes": true, 00:04:56.491 "zcopy": true, 00:04:56.491 "get_zone_info": false, 00:04:56.491 "zone_management": false, 00:04:56.491 "zone_append": false, 00:04:56.491 "compare": false, 00:04:56.491 "compare_and_write": false, 00:04:56.491 "abort": true, 00:04:56.491 "seek_hole": false, 00:04:56.491 "seek_data": false, 00:04:56.491 "copy": true, 00:04:56.491 
"nvme_iov_md": false 00:04:56.491 }, 00:04:56.491 "memory_domains": [ 00:04:56.491 { 00:04:56.491 "dma_device_id": "system", 00:04:56.491 "dma_device_type": 1 00:04:56.491 }, 00:04:56.491 { 00:04:56.491 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:56.491 "dma_device_type": 2 00:04:56.491 } 00:04:56.491 ], 00:04:56.491 "driver_specific": {} 00:04:56.491 } 00:04:56.491 ]' 00:04:56.491 16:53:46 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:56.491 16:53:46 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:56.491 16:53:46 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:56.491 16:53:46 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:56.491 16:53:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:56.491 16:53:46 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:56.491 16:53:46 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:56.491 16:53:46 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:56.491 16:53:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:56.491 16:53:46 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:56.491 16:53:46 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:56.491 16:53:46 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:56.491 16:53:46 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:56.491 00:04:56.491 real 0m0.164s 00:04:56.491 user 0m0.106s 00:04:56.491 sys 0m0.020s 00:04:56.491 16:53:46 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:56.491 16:53:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:56.491 ************************************ 00:04:56.491 END TEST rpc_plugins 00:04:56.491 ************************************ 00:04:56.749 16:53:46 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:56.749 16:53:46 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:56.749 16:53:46 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:56.749 16:53:46 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:56.749 16:53:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:56.749 ************************************ 00:04:56.749 START TEST rpc_trace_cmd_test 00:04:56.749 ************************************ 00:04:56.749 16:53:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:04:56.749 16:53:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:56.749 16:53:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:56.749 16:53:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:56.749 16:53:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:56.749 16:53:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:56.749 16:53:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:56.749 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid58616", 00:04:56.749 "tpoint_group_mask": "0x8", 00:04:56.749 "iscsi_conn": { 00:04:56.749 "mask": "0x2", 00:04:56.749 "tpoint_mask": "0x0" 00:04:56.749 }, 00:04:56.749 "scsi": { 00:04:56.749 "mask": "0x4", 00:04:56.749 "tpoint_mask": "0x0" 00:04:56.749 }, 00:04:56.749 "bdev": { 00:04:56.749 "mask": "0x8", 00:04:56.749 "tpoint_mask": "0xffffffffffffffff" 00:04:56.749 }, 00:04:56.749 "nvmf_rdma": { 00:04:56.749 "mask": "0x10", 00:04:56.749 "tpoint_mask": "0x0" 
00:04:56.749 }, 00:04:56.749 "nvmf_tcp": { 00:04:56.749 "mask": "0x20", 00:04:56.749 "tpoint_mask": "0x0" 00:04:56.749 }, 00:04:56.749 "ftl": { 00:04:56.749 "mask": "0x40", 00:04:56.749 "tpoint_mask": "0x0" 00:04:56.749 }, 00:04:56.749 "blobfs": { 00:04:56.749 "mask": "0x80", 00:04:56.749 "tpoint_mask": "0x0" 00:04:56.749 }, 00:04:56.749 "dsa": { 00:04:56.749 "mask": "0x200", 00:04:56.749 "tpoint_mask": "0x0" 00:04:56.749 }, 00:04:56.749 "thread": { 00:04:56.749 "mask": "0x400", 00:04:56.749 "tpoint_mask": "0x0" 00:04:56.749 }, 00:04:56.749 "nvme_pcie": { 00:04:56.749 "mask": "0x800", 00:04:56.749 "tpoint_mask": "0x0" 00:04:56.749 }, 00:04:56.749 "iaa": { 00:04:56.749 "mask": "0x1000", 00:04:56.749 "tpoint_mask": "0x0" 00:04:56.749 }, 00:04:56.749 "nvme_tcp": { 00:04:56.749 "mask": "0x2000", 00:04:56.749 "tpoint_mask": "0x0" 00:04:56.749 }, 00:04:56.749 "bdev_nvme": { 00:04:56.749 "mask": "0x4000", 00:04:56.749 "tpoint_mask": "0x0" 00:04:56.749 }, 00:04:56.749 "sock": { 00:04:56.749 "mask": "0x8000", 00:04:56.749 "tpoint_mask": "0x0" 00:04:56.749 } 00:04:56.749 }' 00:04:56.749 16:53:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:56.749 16:53:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:56.749 16:53:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:56.749 16:53:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:56.749 16:53:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:56.749 16:53:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:56.749 16:53:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:56.749 16:53:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:56.749 16:53:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:56.749 16:53:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:56.749 00:04:56.749 real 0m0.244s 00:04:56.749 user 0m0.209s 00:04:56.749 sys 0m0.026s 00:04:56.749 16:53:47 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:56.749 16:53:47 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:56.749 ************************************ 00:04:56.749 END TEST rpc_trace_cmd_test 00:04:56.749 ************************************ 00:04:57.007 16:53:47 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:57.007 16:53:47 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:57.007 16:53:47 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:57.007 16:53:47 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:57.007 16:53:47 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:57.007 16:53:47 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:57.007 16:53:47 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:57.007 ************************************ 00:04:57.007 START TEST rpc_daemon_integrity 00:04:57.007 ************************************ 00:04:57.007 16:53:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:57.007 16:53:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:57.007 16:53:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:57.007 16:53:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:57.007 16:53:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:57.007 
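The rpc_trace_cmd_test block above boils down to querying trace_get_info and asserting on the returned masks with jq. A minimal standalone version of those checks, assuming a running target and SPDK's scripts/rpc.py:

  info=$(./scripts/rpc.py trace_get_info)
  echo "$info" | jq -r '.tpoint_group_mask'   # "0x8": only the bdev tpoint group was enabled (-e bdev)
  echo "$info" | jq -r '.bdev.tpoint_mask'    # 0xffffffffffffffff: every tracepoint in that group
  echo "$info" | jq -r '.tpoint_shm_path'     # /dev/shm/spdk_tgt_trace.pid58616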
16:53:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:57.007 16:53:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:57.007 16:53:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:57.007 16:53:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:57.007 16:53:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:57.007 16:53:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:57.007 16:53:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:57.007 16:53:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:57.007 16:53:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:57.007 16:53:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:57.007 16:53:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:57.007 16:53:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:57.008 16:53:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:57.008 { 00:04:57.008 "name": "Malloc2", 00:04:57.008 "aliases": [ 00:04:57.008 "1a73ab90-97fe-45ba-bb85-f179270ca5d3" 00:04:57.008 ], 00:04:57.008 "product_name": "Malloc disk", 00:04:57.008 "block_size": 512, 00:04:57.008 "num_blocks": 16384, 00:04:57.008 "uuid": "1a73ab90-97fe-45ba-bb85-f179270ca5d3", 00:04:57.008 "assigned_rate_limits": { 00:04:57.008 "rw_ios_per_sec": 0, 00:04:57.008 "rw_mbytes_per_sec": 0, 00:04:57.008 "r_mbytes_per_sec": 0, 00:04:57.008 "w_mbytes_per_sec": 0 00:04:57.008 }, 00:04:57.008 "claimed": false, 00:04:57.008 "zoned": false, 00:04:57.008 "supported_io_types": { 00:04:57.008 "read": true, 00:04:57.008 "write": true, 00:04:57.008 "unmap": true, 00:04:57.008 "flush": true, 00:04:57.008 "reset": true, 00:04:57.008 "nvme_admin": false, 00:04:57.008 "nvme_io": false, 00:04:57.008 "nvme_io_md": false, 00:04:57.008 "write_zeroes": true, 00:04:57.008 "zcopy": true, 00:04:57.008 "get_zone_info": false, 00:04:57.008 "zone_management": false, 00:04:57.008 "zone_append": false, 00:04:57.008 "compare": false, 00:04:57.008 "compare_and_write": false, 00:04:57.008 "abort": true, 00:04:57.008 "seek_hole": false, 00:04:57.008 "seek_data": false, 00:04:57.008 "copy": true, 00:04:57.008 "nvme_iov_md": false 00:04:57.008 }, 00:04:57.008 "memory_domains": [ 00:04:57.008 { 00:04:57.008 "dma_device_id": "system", 00:04:57.008 "dma_device_type": 1 00:04:57.008 }, 00:04:57.008 { 00:04:57.008 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:57.008 "dma_device_type": 2 00:04:57.008 } 00:04:57.008 ], 00:04:57.008 "driver_specific": {} 00:04:57.008 } 00:04:57.008 ]' 00:04:57.008 16:53:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:57.008 16:53:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:57.008 16:53:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:57.008 16:53:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:57.008 16:53:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:57.008 [2024-07-15 16:53:47.248886] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:57.008 [2024-07-15 16:53:47.248966] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:57.008 [2024-07-15 16:53:47.248989] vbdev_passthru.c: 
680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x221cbe0 00:04:57.008 [2024-07-15 16:53:47.248999] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:57.008 [2024-07-15 16:53:47.250603] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:57.008 [2024-07-15 16:53:47.250638] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:57.008 Passthru0 00:04:57.008 16:53:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:57.008 16:53:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:57.008 16:53:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:57.008 16:53:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:57.008 16:53:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:57.008 16:53:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:57.008 { 00:04:57.008 "name": "Malloc2", 00:04:57.008 "aliases": [ 00:04:57.008 "1a73ab90-97fe-45ba-bb85-f179270ca5d3" 00:04:57.008 ], 00:04:57.008 "product_name": "Malloc disk", 00:04:57.008 "block_size": 512, 00:04:57.008 "num_blocks": 16384, 00:04:57.008 "uuid": "1a73ab90-97fe-45ba-bb85-f179270ca5d3", 00:04:57.008 "assigned_rate_limits": { 00:04:57.008 "rw_ios_per_sec": 0, 00:04:57.008 "rw_mbytes_per_sec": 0, 00:04:57.008 "r_mbytes_per_sec": 0, 00:04:57.008 "w_mbytes_per_sec": 0 00:04:57.008 }, 00:04:57.008 "claimed": true, 00:04:57.008 "claim_type": "exclusive_write", 00:04:57.008 "zoned": false, 00:04:57.008 "supported_io_types": { 00:04:57.008 "read": true, 00:04:57.008 "write": true, 00:04:57.008 "unmap": true, 00:04:57.008 "flush": true, 00:04:57.008 "reset": true, 00:04:57.008 "nvme_admin": false, 00:04:57.008 "nvme_io": false, 00:04:57.008 "nvme_io_md": false, 00:04:57.008 "write_zeroes": true, 00:04:57.008 "zcopy": true, 00:04:57.008 "get_zone_info": false, 00:04:57.008 "zone_management": false, 00:04:57.008 "zone_append": false, 00:04:57.008 "compare": false, 00:04:57.008 "compare_and_write": false, 00:04:57.008 "abort": true, 00:04:57.008 "seek_hole": false, 00:04:57.008 "seek_data": false, 00:04:57.008 "copy": true, 00:04:57.008 "nvme_iov_md": false 00:04:57.008 }, 00:04:57.008 "memory_domains": [ 00:04:57.008 { 00:04:57.008 "dma_device_id": "system", 00:04:57.008 "dma_device_type": 1 00:04:57.008 }, 00:04:57.008 { 00:04:57.008 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:57.008 "dma_device_type": 2 00:04:57.008 } 00:04:57.008 ], 00:04:57.008 "driver_specific": {} 00:04:57.008 }, 00:04:57.008 { 00:04:57.008 "name": "Passthru0", 00:04:57.008 "aliases": [ 00:04:57.008 "53d94d81-dbb5-5a9d-97bc-b9747350ab03" 00:04:57.008 ], 00:04:57.008 "product_name": "passthru", 00:04:57.008 "block_size": 512, 00:04:57.008 "num_blocks": 16384, 00:04:57.008 "uuid": "53d94d81-dbb5-5a9d-97bc-b9747350ab03", 00:04:57.008 "assigned_rate_limits": { 00:04:57.008 "rw_ios_per_sec": 0, 00:04:57.008 "rw_mbytes_per_sec": 0, 00:04:57.008 "r_mbytes_per_sec": 0, 00:04:57.008 "w_mbytes_per_sec": 0 00:04:57.008 }, 00:04:57.008 "claimed": false, 00:04:57.008 "zoned": false, 00:04:57.008 "supported_io_types": { 00:04:57.008 "read": true, 00:04:57.008 "write": true, 00:04:57.008 "unmap": true, 00:04:57.008 "flush": true, 00:04:57.008 "reset": true, 00:04:57.008 "nvme_admin": false, 00:04:57.008 "nvme_io": false, 00:04:57.008 "nvme_io_md": false, 00:04:57.008 "write_zeroes": true, 00:04:57.008 "zcopy": true, 
00:04:57.008 "get_zone_info": false, 00:04:57.008 "zone_management": false, 00:04:57.008 "zone_append": false, 00:04:57.008 "compare": false, 00:04:57.008 "compare_and_write": false, 00:04:57.008 "abort": true, 00:04:57.008 "seek_hole": false, 00:04:57.008 "seek_data": false, 00:04:57.008 "copy": true, 00:04:57.008 "nvme_iov_md": false 00:04:57.008 }, 00:04:57.008 "memory_domains": [ 00:04:57.008 { 00:04:57.008 "dma_device_id": "system", 00:04:57.008 "dma_device_type": 1 00:04:57.008 }, 00:04:57.008 { 00:04:57.008 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:57.008 "dma_device_type": 2 00:04:57.008 } 00:04:57.008 ], 00:04:57.008 "driver_specific": { 00:04:57.008 "passthru": { 00:04:57.008 "name": "Passthru0", 00:04:57.008 "base_bdev_name": "Malloc2" 00:04:57.008 } 00:04:57.008 } 00:04:57.008 } 00:04:57.008 ]' 00:04:57.008 16:53:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:57.266 16:53:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:57.266 16:53:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:57.266 16:53:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:57.266 16:53:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:57.266 16:53:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:57.266 16:53:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:57.266 16:53:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:57.266 16:53:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:57.266 16:53:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:57.266 16:53:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:57.266 16:53:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:57.266 16:53:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:57.267 16:53:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:57.267 16:53:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:57.267 16:53:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:57.267 16:53:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:57.267 00:04:57.267 real 0m0.326s 00:04:57.267 user 0m0.218s 00:04:57.267 sys 0m0.045s 00:04:57.267 16:53:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:57.267 ************************************ 00:04:57.267 END TEST rpc_daemon_integrity 00:04:57.267 16:53:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:57.267 ************************************ 00:04:57.267 16:53:47 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:57.267 16:53:47 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:57.267 16:53:47 rpc -- rpc/rpc.sh@84 -- # killprocess 58616 00:04:57.267 16:53:47 rpc -- common/autotest_common.sh@948 -- # '[' -z 58616 ']' 00:04:57.267 16:53:47 rpc -- common/autotest_common.sh@952 -- # kill -0 58616 00:04:57.267 16:53:47 rpc -- common/autotest_common.sh@953 -- # uname 00:04:57.267 16:53:47 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:57.267 16:53:47 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 58616 00:04:57.267 16:53:47 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:57.267 
16:53:47 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:57.267 killing process with pid 58616 00:04:57.267 16:53:47 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 58616' 00:04:57.267 16:53:47 rpc -- common/autotest_common.sh@967 -- # kill 58616 00:04:57.267 16:53:47 rpc -- common/autotest_common.sh@972 -- # wait 58616 00:04:57.832 00:04:57.832 real 0m2.751s 00:04:57.832 user 0m3.554s 00:04:57.832 sys 0m0.658s 00:04:57.832 16:53:47 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:57.832 16:53:47 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:57.832 ************************************ 00:04:57.832 END TEST rpc 00:04:57.832 ************************************ 00:04:57.832 16:53:47 -- common/autotest_common.sh@1142 -- # return 0 00:04:57.832 16:53:47 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:57.832 16:53:47 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:57.832 16:53:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:57.832 16:53:47 -- common/autotest_common.sh@10 -- # set +x 00:04:57.832 ************************************ 00:04:57.832 START TEST skip_rpc 00:04:57.832 ************************************ 00:04:57.832 16:53:47 skip_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:57.832 * Looking for test storage... 00:04:57.832 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:57.832 16:53:48 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:57.832 16:53:48 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:57.832 16:53:48 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:57.832 16:53:48 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:57.832 16:53:48 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:57.832 16:53:48 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:57.832 ************************************ 00:04:57.832 START TEST skip_rpc 00:04:57.832 ************************************ 00:04:57.832 16:53:48 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:04:57.832 16:53:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=58814 00:04:57.832 16:53:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:57.832 16:53:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:57.832 16:53:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:57.832 [2024-07-15 16:53:48.078461] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
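The skip_rpc case starting here launches the target with --no-rpc-server and then expects every RPC to fail (the NOT wrapper around rpc_cmd spdk_get_version below). A condensed, hedged reproduction of that check:

  ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  tgt=$!
  sleep 5                                          # no RPC socket to poll, so the test simply waits
  if ./scripts/rpc.py spdk_get_version; then
    echo "unexpected: RPC answered despite --no-rpc-server" >&2
  fi
  kill "$tgt"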
00:04:57.832 [2024-07-15 16:53:48.078572] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58814 ] 00:04:58.089 [2024-07-15 16:53:48.217668] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:58.089 [2024-07-15 16:53:48.299352] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.089 [2024-07-15 16:53:48.353825] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:03.396 16:53:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:03.396 16:53:53 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:03.396 16:53:53 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:03.396 16:53:53 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:03.396 16:53:53 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:03.396 16:53:53 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:03.396 16:53:53 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:03.396 16:53:53 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:05:03.396 16:53:53 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:03.396 16:53:53 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.396 16:53:53 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:03.396 16:53:53 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:03.396 16:53:53 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:03.396 16:53:53 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:03.396 16:53:53 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:03.397 16:53:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:03.397 16:53:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 58814 00:05:03.397 16:53:53 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 58814 ']' 00:05:03.397 16:53:53 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 58814 00:05:03.397 16:53:53 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:05:03.397 16:53:53 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:03.397 16:53:53 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 58814 00:05:03.397 16:53:53 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:03.397 16:53:53 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:03.397 16:53:53 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 58814' 00:05:03.397 killing process with pid 58814 00:05:03.397 16:53:53 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 58814 00:05:03.397 16:53:53 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 58814 00:05:03.397 00:05:03.397 real 0m5.426s 00:05:03.397 user 0m5.047s 00:05:03.397 sys 0m0.286s 00:05:03.397 16:53:53 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:03.397 ************************************ 00:05:03.397 END TEST skip_rpc 
00:05:03.397 ************************************ 00:05:03.397 16:53:53 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.397 16:53:53 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:03.397 16:53:53 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:03.397 16:53:53 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:03.397 16:53:53 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:03.397 16:53:53 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.397 ************************************ 00:05:03.397 START TEST skip_rpc_with_json 00:05:03.397 ************************************ 00:05:03.397 16:53:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:05:03.397 16:53:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:03.397 16:53:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:03.397 16:53:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=58895 00:05:03.397 16:53:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:03.397 16:53:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 58895 00:05:03.397 16:53:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 58895 ']' 00:05:03.397 16:53:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:03.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:03.397 16:53:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:03.397 16:53:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:03.397 16:53:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:03.397 16:53:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:03.397 [2024-07-15 16:53:53.581566] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
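Once this target (pid 58895) is up, the JSON-config test issues three RPCs: query TCP transports (expected to fail while none exist), create one, then dump the whole configuration. In plain rpc.py terms, roughly (paths relative to an SPDK checkout, shown for illustration):

  ./scripts/rpc.py nvmf_get_transports --trtype tcp     # -19 "No such device" until a transport exists
  ./scripts/rpc.py nvmf_create_transport -t tcp         # target logs "*** TCP Transport Init ***"
  ./scripts/rpc.py save_config > test/rpc/config.json   # serialize every subsystem to JSON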
00:05:03.397 [2024-07-15 16:53:53.581867] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58895 ] 00:05:03.655 [2024-07-15 16:53:53.720486] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.655 [2024-07-15 16:53:53.830845] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.655 [2024-07-15 16:53:53.885848] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:04.222 16:53:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:04.222 16:53:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:05:04.222 16:53:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:04.222 16:53:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:04.222 16:53:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:04.222 [2024-07-15 16:53:54.471985] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:04.222 request: 00:05:04.222 { 00:05:04.222 "trtype": "tcp", 00:05:04.222 "method": "nvmf_get_transports", 00:05:04.222 "req_id": 1 00:05:04.222 } 00:05:04.222 Got JSON-RPC error response 00:05:04.222 response: 00:05:04.222 { 00:05:04.222 "code": -19, 00:05:04.222 "message": "No such device" 00:05:04.222 } 00:05:04.222 16:53:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:04.222 16:53:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:04.222 16:53:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:04.222 16:53:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:04.222 [2024-07-15 16:53:54.484120] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:04.222 16:53:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:04.222 16:53:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:04.222 16:53:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:04.222 16:53:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:04.481 16:53:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:04.481 16:53:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:04.481 { 00:05:04.481 "subsystems": [ 00:05:04.481 { 00:05:04.481 "subsystem": "keyring", 00:05:04.481 "config": [] 00:05:04.481 }, 00:05:04.481 { 00:05:04.481 "subsystem": "iobuf", 00:05:04.481 "config": [ 00:05:04.481 { 00:05:04.481 "method": "iobuf_set_options", 00:05:04.481 "params": { 00:05:04.481 "small_pool_count": 8192, 00:05:04.481 "large_pool_count": 1024, 00:05:04.481 "small_bufsize": 8192, 00:05:04.481 "large_bufsize": 135168 00:05:04.481 } 00:05:04.481 } 00:05:04.481 ] 00:05:04.481 }, 00:05:04.481 { 00:05:04.481 "subsystem": "sock", 00:05:04.481 "config": [ 00:05:04.481 { 00:05:04.481 "method": "sock_set_default_impl", 00:05:04.481 "params": { 00:05:04.481 "impl_name": "uring" 00:05:04.481 } 00:05:04.481 }, 00:05:04.481 { 00:05:04.481 "method": "sock_impl_set_options", 
00:05:04.481 "params": { 00:05:04.481 "impl_name": "ssl", 00:05:04.481 "recv_buf_size": 4096, 00:05:04.481 "send_buf_size": 4096, 00:05:04.481 "enable_recv_pipe": true, 00:05:04.481 "enable_quickack": false, 00:05:04.481 "enable_placement_id": 0, 00:05:04.481 "enable_zerocopy_send_server": true, 00:05:04.481 "enable_zerocopy_send_client": false, 00:05:04.481 "zerocopy_threshold": 0, 00:05:04.481 "tls_version": 0, 00:05:04.481 "enable_ktls": false 00:05:04.481 } 00:05:04.481 }, 00:05:04.481 { 00:05:04.481 "method": "sock_impl_set_options", 00:05:04.481 "params": { 00:05:04.481 "impl_name": "posix", 00:05:04.481 "recv_buf_size": 2097152, 00:05:04.481 "send_buf_size": 2097152, 00:05:04.481 "enable_recv_pipe": true, 00:05:04.481 "enable_quickack": false, 00:05:04.481 "enable_placement_id": 0, 00:05:04.481 "enable_zerocopy_send_server": true, 00:05:04.481 "enable_zerocopy_send_client": false, 00:05:04.481 "zerocopy_threshold": 0, 00:05:04.481 "tls_version": 0, 00:05:04.481 "enable_ktls": false 00:05:04.481 } 00:05:04.481 }, 00:05:04.481 { 00:05:04.481 "method": "sock_impl_set_options", 00:05:04.481 "params": { 00:05:04.481 "impl_name": "uring", 00:05:04.481 "recv_buf_size": 2097152, 00:05:04.481 "send_buf_size": 2097152, 00:05:04.481 "enable_recv_pipe": true, 00:05:04.481 "enable_quickack": false, 00:05:04.481 "enable_placement_id": 0, 00:05:04.481 "enable_zerocopy_send_server": false, 00:05:04.481 "enable_zerocopy_send_client": false, 00:05:04.481 "zerocopy_threshold": 0, 00:05:04.481 "tls_version": 0, 00:05:04.481 "enable_ktls": false 00:05:04.481 } 00:05:04.481 } 00:05:04.481 ] 00:05:04.481 }, 00:05:04.481 { 00:05:04.481 "subsystem": "vmd", 00:05:04.481 "config": [] 00:05:04.481 }, 00:05:04.481 { 00:05:04.481 "subsystem": "accel", 00:05:04.481 "config": [ 00:05:04.481 { 00:05:04.481 "method": "accel_set_options", 00:05:04.481 "params": { 00:05:04.481 "small_cache_size": 128, 00:05:04.481 "large_cache_size": 16, 00:05:04.481 "task_count": 2048, 00:05:04.481 "sequence_count": 2048, 00:05:04.481 "buf_count": 2048 00:05:04.481 } 00:05:04.481 } 00:05:04.481 ] 00:05:04.481 }, 00:05:04.481 { 00:05:04.481 "subsystem": "bdev", 00:05:04.481 "config": [ 00:05:04.481 { 00:05:04.481 "method": "bdev_set_options", 00:05:04.481 "params": { 00:05:04.481 "bdev_io_pool_size": 65535, 00:05:04.481 "bdev_io_cache_size": 256, 00:05:04.481 "bdev_auto_examine": true, 00:05:04.481 "iobuf_small_cache_size": 128, 00:05:04.481 "iobuf_large_cache_size": 16 00:05:04.481 } 00:05:04.481 }, 00:05:04.481 { 00:05:04.481 "method": "bdev_raid_set_options", 00:05:04.481 "params": { 00:05:04.481 "process_window_size_kb": 1024 00:05:04.481 } 00:05:04.481 }, 00:05:04.481 { 00:05:04.481 "method": "bdev_iscsi_set_options", 00:05:04.481 "params": { 00:05:04.481 "timeout_sec": 30 00:05:04.481 } 00:05:04.481 }, 00:05:04.481 { 00:05:04.481 "method": "bdev_nvme_set_options", 00:05:04.481 "params": { 00:05:04.481 "action_on_timeout": "none", 00:05:04.481 "timeout_us": 0, 00:05:04.481 "timeout_admin_us": 0, 00:05:04.481 "keep_alive_timeout_ms": 10000, 00:05:04.481 "arbitration_burst": 0, 00:05:04.481 "low_priority_weight": 0, 00:05:04.481 "medium_priority_weight": 0, 00:05:04.481 "high_priority_weight": 0, 00:05:04.481 "nvme_adminq_poll_period_us": 10000, 00:05:04.481 "nvme_ioq_poll_period_us": 0, 00:05:04.481 "io_queue_requests": 0, 00:05:04.481 "delay_cmd_submit": true, 00:05:04.481 "transport_retry_count": 4, 00:05:04.481 "bdev_retry_count": 3, 00:05:04.481 "transport_ack_timeout": 0, 00:05:04.481 "ctrlr_loss_timeout_sec": 0, 00:05:04.481 
"reconnect_delay_sec": 0, 00:05:04.481 "fast_io_fail_timeout_sec": 0, 00:05:04.481 "disable_auto_failback": false, 00:05:04.481 "generate_uuids": false, 00:05:04.481 "transport_tos": 0, 00:05:04.481 "nvme_error_stat": false, 00:05:04.481 "rdma_srq_size": 0, 00:05:04.481 "io_path_stat": false, 00:05:04.481 "allow_accel_sequence": false, 00:05:04.481 "rdma_max_cq_size": 0, 00:05:04.481 "rdma_cm_event_timeout_ms": 0, 00:05:04.481 "dhchap_digests": [ 00:05:04.481 "sha256", 00:05:04.481 "sha384", 00:05:04.481 "sha512" 00:05:04.481 ], 00:05:04.481 "dhchap_dhgroups": [ 00:05:04.481 "null", 00:05:04.481 "ffdhe2048", 00:05:04.481 "ffdhe3072", 00:05:04.481 "ffdhe4096", 00:05:04.481 "ffdhe6144", 00:05:04.481 "ffdhe8192" 00:05:04.481 ] 00:05:04.481 } 00:05:04.481 }, 00:05:04.481 { 00:05:04.481 "method": "bdev_nvme_set_hotplug", 00:05:04.481 "params": { 00:05:04.481 "period_us": 100000, 00:05:04.481 "enable": false 00:05:04.481 } 00:05:04.481 }, 00:05:04.481 { 00:05:04.481 "method": "bdev_wait_for_examine" 00:05:04.481 } 00:05:04.481 ] 00:05:04.481 }, 00:05:04.481 { 00:05:04.481 "subsystem": "scsi", 00:05:04.482 "config": null 00:05:04.482 }, 00:05:04.482 { 00:05:04.482 "subsystem": "scheduler", 00:05:04.482 "config": [ 00:05:04.482 { 00:05:04.482 "method": "framework_set_scheduler", 00:05:04.482 "params": { 00:05:04.482 "name": "static" 00:05:04.482 } 00:05:04.482 } 00:05:04.482 ] 00:05:04.482 }, 00:05:04.482 { 00:05:04.482 "subsystem": "vhost_scsi", 00:05:04.482 "config": [] 00:05:04.482 }, 00:05:04.482 { 00:05:04.482 "subsystem": "vhost_blk", 00:05:04.482 "config": [] 00:05:04.482 }, 00:05:04.482 { 00:05:04.482 "subsystem": "ublk", 00:05:04.482 "config": [] 00:05:04.482 }, 00:05:04.482 { 00:05:04.482 "subsystem": "nbd", 00:05:04.482 "config": [] 00:05:04.482 }, 00:05:04.482 { 00:05:04.482 "subsystem": "nvmf", 00:05:04.482 "config": [ 00:05:04.482 { 00:05:04.482 "method": "nvmf_set_config", 00:05:04.482 "params": { 00:05:04.482 "discovery_filter": "match_any", 00:05:04.482 "admin_cmd_passthru": { 00:05:04.482 "identify_ctrlr": false 00:05:04.482 } 00:05:04.482 } 00:05:04.482 }, 00:05:04.482 { 00:05:04.482 "method": "nvmf_set_max_subsystems", 00:05:04.482 "params": { 00:05:04.482 "max_subsystems": 1024 00:05:04.482 } 00:05:04.482 }, 00:05:04.482 { 00:05:04.482 "method": "nvmf_set_crdt", 00:05:04.482 "params": { 00:05:04.482 "crdt1": 0, 00:05:04.482 "crdt2": 0, 00:05:04.482 "crdt3": 0 00:05:04.482 } 00:05:04.482 }, 00:05:04.482 { 00:05:04.482 "method": "nvmf_create_transport", 00:05:04.482 "params": { 00:05:04.482 "trtype": "TCP", 00:05:04.482 "max_queue_depth": 128, 00:05:04.482 "max_io_qpairs_per_ctrlr": 127, 00:05:04.482 "in_capsule_data_size": 4096, 00:05:04.482 "max_io_size": 131072, 00:05:04.482 "io_unit_size": 131072, 00:05:04.482 "max_aq_depth": 128, 00:05:04.482 "num_shared_buffers": 511, 00:05:04.482 "buf_cache_size": 4294967295, 00:05:04.482 "dif_insert_or_strip": false, 00:05:04.482 "zcopy": false, 00:05:04.482 "c2h_success": true, 00:05:04.482 "sock_priority": 0, 00:05:04.482 "abort_timeout_sec": 1, 00:05:04.482 "ack_timeout": 0, 00:05:04.482 "data_wr_pool_size": 0 00:05:04.482 } 00:05:04.482 } 00:05:04.482 ] 00:05:04.482 }, 00:05:04.482 { 00:05:04.482 "subsystem": "iscsi", 00:05:04.482 "config": [ 00:05:04.482 { 00:05:04.482 "method": "iscsi_set_options", 00:05:04.482 "params": { 00:05:04.482 "node_base": "iqn.2016-06.io.spdk", 00:05:04.482 "max_sessions": 128, 00:05:04.482 "max_connections_per_session": 2, 00:05:04.482 "max_queue_depth": 64, 00:05:04.482 "default_time2wait": 2, 
00:05:04.482 "default_time2retain": 20, 00:05:04.482 "first_burst_length": 8192, 00:05:04.482 "immediate_data": true, 00:05:04.482 "allow_duplicated_isid": false, 00:05:04.482 "error_recovery_level": 0, 00:05:04.482 "nop_timeout": 60, 00:05:04.482 "nop_in_interval": 30, 00:05:04.482 "disable_chap": false, 00:05:04.482 "require_chap": false, 00:05:04.482 "mutual_chap": false, 00:05:04.482 "chap_group": 0, 00:05:04.482 "max_large_datain_per_connection": 64, 00:05:04.482 "max_r2t_per_connection": 4, 00:05:04.482 "pdu_pool_size": 36864, 00:05:04.482 "immediate_data_pool_size": 16384, 00:05:04.482 "data_out_pool_size": 2048 00:05:04.482 } 00:05:04.482 } 00:05:04.482 ] 00:05:04.482 } 00:05:04.482 ] 00:05:04.482 } 00:05:04.482 16:53:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:04.482 16:53:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 58895 00:05:04.482 16:53:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 58895 ']' 00:05:04.482 16:53:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 58895 00:05:04.482 16:53:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:04.482 16:53:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:04.482 16:53:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 58895 00:05:04.482 killing process with pid 58895 00:05:04.482 16:53:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:04.482 16:53:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:04.482 16:53:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 58895' 00:05:04.482 16:53:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 58895 00:05:04.482 16:53:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 58895 00:05:05.048 16:53:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=58922 00:05:05.048 16:53:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:05.048 16:53:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:10.314 16:54:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 58922 00:05:10.314 16:54:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 58922 ']' 00:05:10.314 16:54:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 58922 00:05:10.314 16:54:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:10.314 16:54:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:10.314 16:54:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 58922 00:05:10.314 killing process with pid 58922 00:05:10.314 16:54:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:10.314 16:54:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:10.314 16:54:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 58922' 00:05:10.314 16:54:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 58922 
00:05:10.314 16:54:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 58922 00:05:10.314 16:54:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:10.314 16:54:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:10.314 ************************************ 00:05:10.314 END TEST skip_rpc_with_json 00:05:10.314 ************************************ 00:05:10.314 00:05:10.314 real 0m7.028s 00:05:10.314 user 0m6.681s 00:05:10.314 sys 0m0.668s 00:05:10.314 16:54:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:10.314 16:54:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:10.314 16:54:00 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:10.314 16:54:00 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:10.314 16:54:00 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:10.314 16:54:00 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:10.314 16:54:00 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.314 ************************************ 00:05:10.314 START TEST skip_rpc_with_delay 00:05:10.314 ************************************ 00:05:10.314 16:54:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:05:10.314 16:54:00 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:10.314 16:54:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:05:10.314 16:54:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:10.314 16:54:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:10.314 16:54:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:10.314 16:54:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:10.315 16:54:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:10.315 16:54:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:10.315 16:54:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:10.315 16:54:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:10.315 16:54:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:10.315 16:54:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:10.573 [2024-07-15 16:54:00.641705] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
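The dump above is the target's JSON configuration; skip_rpc_with_json then stops the RPC-driven target and relaunches spdk_tgt with --no-rpc-server --json test/rpc/config.json, finally grepping the captured log for 'TCP Transport Init' to prove the transport came up from the file alone. A rough sketch of that flow, assuming the dump comes from save_config over the default RPC socket, with paths shortened and PID handling simplified:

scripts/rpc.py save_config > test/rpc/config.json   # dump the live configuration
kill -SIGINT "$tgt_pid"                             # stop the RPC-driven target
build/bin/spdk_tgt --no-rpc-server -m 0x1 \
    --json test/rpc/config.json > test/rpc/log.txt 2>&1 &
json_pid=$!
sleep 5                                             # same 5 s settle time the test uses
kill -SIGINT "$json_pid"
grep -q 'TCP Transport Init' test/rpc/log.txt       # transport must come up from JSON alone

The skip_rpc_with_delay case that follows flips the check: --wait-for-rpc combined with --no-rpc-server must fail with the 'Cannot use --wait-for-rpc' error shown above, so the NOT wrapper treats a non-zero exit as the passing result.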
00:05:10.573 [2024-07-15 16:54:00.641838] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:10.574 16:54:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:05:10.574 16:54:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:10.574 16:54:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:10.574 16:54:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:10.574 00:05:10.574 real 0m0.090s 00:05:10.574 user 0m0.060s 00:05:10.574 sys 0m0.029s 00:05:10.574 ************************************ 00:05:10.574 END TEST skip_rpc_with_delay 00:05:10.574 ************************************ 00:05:10.574 16:54:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:10.574 16:54:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:10.574 16:54:00 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:10.574 16:54:00 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:10.574 16:54:00 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:10.574 16:54:00 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:10.574 16:54:00 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:10.574 16:54:00 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:10.574 16:54:00 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.574 ************************************ 00:05:10.574 START TEST exit_on_failed_rpc_init 00:05:10.574 ************************************ 00:05:10.574 16:54:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:05:10.574 16:54:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=59032 00:05:10.574 16:54:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:10.574 16:54:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 59032 00:05:10.574 16:54:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 59032 ']' 00:05:10.574 16:54:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:10.574 16:54:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:10.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:10.574 16:54:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:10.574 16:54:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:10.574 16:54:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:10.574 [2024-07-15 16:54:00.780122] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:05:10.574 [2024-07-15 16:54:00.780216] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59032 ] 00:05:10.832 [2024-07-15 16:54:00.919157] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.832 [2024-07-15 16:54:01.035925] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.832 [2024-07-15 16:54:01.093517] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:11.769 16:54:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:11.769 16:54:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:05:11.769 16:54:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:11.769 16:54:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:11.769 16:54:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:05:11.769 16:54:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:11.769 16:54:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:11.769 16:54:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:11.769 16:54:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:11.769 16:54:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:11.769 16:54:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:11.769 16:54:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:11.769 16:54:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:11.769 16:54:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:11.769 16:54:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:11.769 [2024-07-15 16:54:01.818296] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:05:11.769 [2024-07-15 16:54:01.818414] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59050 ] 00:05:11.769 [2024-07-15 16:54:01.955500] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.028 [2024-07-15 16:54:02.067853] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:12.028 [2024-07-15 16:54:02.068215] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
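The failure just traced is the whole point of exit_on_failed_rpc_init: both targets default to /var/tmp/spdk.sock, so the second instance cannot bring up its RPC server. A sketch of the scenario, with a plain if standing in for the NOT wrapper and waitforlisten assumed to be sourced from autotest_common.sh as in the run above:

build/bin/spdk_tgt -m 0x1 &            # first instance owns /var/tmp/spdk.sock
first_pid=$!
waitforlisten "$first_pid"             # helper from autotest_common.sh
if build/bin/spdk_tgt -m 0x2; then     # second instance, different core mask, same default socket
    echo 'unexpected: second target started' >&2
    exit 1
fi                                     # non-zero exit (socket in use) is the expected outcome
kill -SIGINT "$first_pid"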
00:05:12.028 [2024-07-15 16:54:02.068439] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:12.028 [2024-07-15 16:54:02.068682] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:12.028 16:54:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:05:12.028 16:54:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:12.028 16:54:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:05:12.028 16:54:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:05:12.028 16:54:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:05:12.028 16:54:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:12.028 16:54:02 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:12.028 16:54:02 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 59032 00:05:12.028 16:54:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 59032 ']' 00:05:12.028 16:54:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 59032 00:05:12.028 16:54:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:05:12.028 16:54:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:12.028 16:54:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59032 00:05:12.028 killing process with pid 59032 00:05:12.028 16:54:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:12.028 16:54:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:12.029 16:54:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59032' 00:05:12.029 16:54:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 59032 00:05:12.029 16:54:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 59032 00:05:12.597 ************************************ 00:05:12.597 END TEST exit_on_failed_rpc_init 00:05:12.597 ************************************ 00:05:12.597 00:05:12.597 real 0m1.895s 00:05:12.597 user 0m2.228s 00:05:12.597 sys 0m0.439s 00:05:12.597 16:54:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:12.597 16:54:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:12.597 16:54:02 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:12.597 16:54:02 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:12.597 00:05:12.597 real 0m14.729s 00:05:12.597 user 0m14.123s 00:05:12.597 sys 0m1.596s 00:05:12.597 16:54:02 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:12.597 ************************************ 00:05:12.597 END TEST skip_rpc 00:05:12.597 ************************************ 00:05:12.597 16:54:02 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.597 16:54:02 -- common/autotest_common.sh@1142 -- # return 0 00:05:12.597 16:54:02 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:12.597 16:54:02 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:12.597 
16:54:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:12.597 16:54:02 -- common/autotest_common.sh@10 -- # set +x 00:05:12.597 ************************************ 00:05:12.597 START TEST rpc_client 00:05:12.597 ************************************ 00:05:12.597 16:54:02 rpc_client -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:12.597 * Looking for test storage... 00:05:12.597 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:12.597 16:54:02 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:12.597 OK 00:05:12.597 16:54:02 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:12.597 00:05:12.597 real 0m0.096s 00:05:12.597 user 0m0.046s 00:05:12.597 sys 0m0.056s 00:05:12.597 16:54:02 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:12.597 16:54:02 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:12.597 ************************************ 00:05:12.597 END TEST rpc_client 00:05:12.597 ************************************ 00:05:12.597 16:54:02 -- common/autotest_common.sh@1142 -- # return 0 00:05:12.597 16:54:02 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:12.597 16:54:02 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:12.597 16:54:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:12.597 16:54:02 -- common/autotest_common.sh@10 -- # set +x 00:05:12.597 ************************************ 00:05:12.597 START TEST json_config 00:05:12.597 ************************************ 00:05:12.597 16:54:02 json_config -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:12.858 16:54:02 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:12.858 16:54:02 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:12.858 16:54:02 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:12.858 16:54:02 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:12.858 16:54:02 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:12.858 16:54:02 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:12.858 16:54:02 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:12.858 16:54:02 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:12.858 16:54:02 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:12.858 16:54:02 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:12.858 16:54:02 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:12.858 16:54:02 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:12.858 16:54:02 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da 00:05:12.858 16:54:02 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=0b4e8503-7bac-4879-926a-209303c4b3da 00:05:12.858 16:54:02 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:12.858 16:54:02 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:12.858 16:54:02 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:12.858 16:54:02 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:12.858 16:54:02 json_config -- 
nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:12.858 16:54:02 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:12.858 16:54:02 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:12.858 16:54:02 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:12.858 16:54:02 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:12.858 16:54:02 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:12.858 16:54:02 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:12.858 16:54:02 json_config -- paths/export.sh@5 -- # export PATH 00:05:12.858 16:54:02 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:12.858 16:54:02 json_config -- nvmf/common.sh@47 -- # : 0 00:05:12.858 16:54:02 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:12.858 16:54:02 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:12.858 16:54:02 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:12.858 16:54:02 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:12.858 16:54:02 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:12.858 16:54:02 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:12.858 16:54:02 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:12.858 16:54:02 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:12.858 16:54:02 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:12.858 16:54:02 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:12.858 16:54:02 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:12.858 16:54:02 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:12.858 16:54:02 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + 
SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:12.858 16:54:02 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:12.858 16:54:02 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:12.858 16:54:02 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:12.858 16:54:02 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:12.858 16:54:02 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:12.858 16:54:02 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:12.858 16:54:02 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:05:12.858 16:54:02 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:12.858 16:54:02 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:12.858 16:54:02 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:12.858 16:54:02 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:05:12.858 INFO: JSON configuration test init 00:05:12.858 16:54:02 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:05:12.858 16:54:02 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:05:12.858 16:54:02 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:12.858 16:54:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:12.858 16:54:02 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:05:12.858 16:54:02 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:12.858 16:54:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:12.858 16:54:02 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:05:12.858 16:54:02 json_config -- json_config/common.sh@9 -- # local app=target 00:05:12.858 16:54:02 json_config -- json_config/common.sh@10 -- # shift 00:05:12.858 16:54:02 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:12.858 16:54:02 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:12.858 16:54:02 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:12.858 16:54:02 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:12.858 16:54:02 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:12.858 16:54:02 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=59168 00:05:12.858 16:54:02 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:12.858 Waiting for target to run... 00:05:12.858 16:54:02 json_config -- json_config/common.sh@25 -- # waitforlisten 59168 /var/tmp/spdk_tgt.sock 00:05:12.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
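The json_config harness keeps its per-app state in bash associative arrays keyed by app name, as declared above (pid, RPC socket, launch parameters, config path). A stripped-down sketch of that pattern, with start_app as a simplified illustrative stand-in for the real json_config_test_start_app helper and spdk_tgt abbreviating the build/bin path:

declare -A app_pid=([target]='' [initiator]='')
declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock' [initiator]='/var/tmp/spdk_initiator.sock')
declare -A app_params=([target]='-m 0x1 -s 1024' [initiator]='-m 0x2 -g -u -s 1024')

start_app() {
    local app=$1; shift
    # word splitting on the params string is intentional here
    spdk_tgt ${app_params[$app]} -r "${app_socket[$app]}" "$@" &
    app_pid[$app]=$!
}
start_app target --wait-for-rpc        # matches the target launch traced below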
00:05:12.858 16:54:02 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:12.858 16:54:02 json_config -- common/autotest_common.sh@829 -- # '[' -z 59168 ']' 00:05:12.858 16:54:02 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:12.859 16:54:02 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:12.859 16:54:02 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:12.859 16:54:02 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:12.859 16:54:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:12.859 [2024-07-15 16:54:03.003887] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:05:12.859 [2024-07-15 16:54:03.003988] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59168 ] 00:05:13.425 [2024-07-15 16:54:03.430334] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.425 [2024-07-15 16:54:03.511319] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.746 00:05:13.746 16:54:03 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:13.746 16:54:03 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:13.746 16:54:03 json_config -- json_config/common.sh@26 -- # echo '' 00:05:13.746 16:54:03 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:05:13.746 16:54:03 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:05:13.746 16:54:03 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:13.746 16:54:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:13.746 16:54:03 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:05:13.746 16:54:03 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:05:13.746 16:54:03 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:13.746 16:54:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:13.746 16:54:03 json_config -- json_config/json_config.sh@273 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:13.746 16:54:03 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:05:13.746 16:54:03 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:14.045 [2024-07-15 16:54:04.215446] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:14.304 16:54:04 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:05:14.304 16:54:04 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:14.304 16:54:04 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:14.304 16:54:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:14.304 16:54:04 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:14.304 16:54:04 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:14.304 16:54:04 
json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:14.304 16:54:04 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:14.304 16:54:04 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:14.304 16:54:04 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:14.563 16:54:04 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:14.563 16:54:04 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:14.563 16:54:04 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:14.563 16:54:04 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:05:14.563 16:54:04 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:14.563 16:54:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:14.563 16:54:04 json_config -- json_config/json_config.sh@55 -- # return 0 00:05:14.563 16:54:04 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:05:14.563 16:54:04 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:14.563 16:54:04 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:14.563 16:54:04 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:05:14.563 16:54:04 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:05:14.563 16:54:04 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:05:14.563 16:54:04 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:14.563 16:54:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:14.563 16:54:04 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:14.563 16:54:04 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:05:14.563 16:54:04 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:05:14.563 16:54:04 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:14.563 16:54:04 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:14.822 MallocForNvmf0 00:05:14.822 16:54:04 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:14.822 16:54:04 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:15.081 MallocForNvmf1 00:05:15.081 16:54:05 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:15.081 16:54:05 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:15.339 [2024-07-15 16:54:05.518597] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:15.339 16:54:05 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:15.339 16:54:05 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:15.597 16:54:05 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:15.597 16:54:05 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:15.855 16:54:05 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:15.855 16:54:05 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:16.113 16:54:06 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:16.113 16:54:06 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:16.372 [2024-07-15 16:54:06.415143] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:16.372 16:54:06 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:05:16.372 16:54:06 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:16.372 16:54:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:16.372 16:54:06 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:05:16.372 16:54:06 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:16.372 16:54:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:16.372 16:54:06 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:05:16.372 16:54:06 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:16.373 16:54:06 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:16.631 MallocBdevForConfigChangeCheck 00:05:16.631 16:54:06 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:05:16.631 16:54:06 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:16.631 16:54:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:16.631 16:54:06 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:05:16.631 16:54:06 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:16.889 INFO: shutting down applications... 00:05:16.889 16:54:07 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 
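The create_nvmf_subsystem_config step traced above reduces to a short run of rpc.py calls against the target socket; condensed here with a small rpc wrapper, flags copied from the trace:

rpc() { scripts/rpc.py -s /var/tmp/spdk_tgt.sock "$@"; }

rpc bdev_malloc_create 8 512 --name MallocForNvmf0     # size in MiB, then block size
rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
rpc nvmf_create_transport -t tcp -u 8192 -c 0          # TCP transport, same -u/-c values as the test
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420

These are the entries the later save_config comparisons work against, along with the MallocBdevForConfigChangeCheck bdev created right afterwards.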
00:05:16.889 16:54:07 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:05:16.889 16:54:07 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:05:16.889 16:54:07 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:05:16.889 16:54:07 json_config -- json_config/json_config.sh@333 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:17.455 Calling clear_iscsi_subsystem 00:05:17.455 Calling clear_nvmf_subsystem 00:05:17.455 Calling clear_nbd_subsystem 00:05:17.455 Calling clear_ublk_subsystem 00:05:17.455 Calling clear_vhost_blk_subsystem 00:05:17.455 Calling clear_vhost_scsi_subsystem 00:05:17.455 Calling clear_bdev_subsystem 00:05:17.455 16:54:07 json_config -- json_config/json_config.sh@337 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:05:17.455 16:54:07 json_config -- json_config/json_config.sh@343 -- # count=100 00:05:17.455 16:54:07 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:05:17.455 16:54:07 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:17.455 16:54:07 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:17.455 16:54:07 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:05:17.713 16:54:07 json_config -- json_config/json_config.sh@345 -- # break 00:05:17.714 16:54:07 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:05:17.714 16:54:07 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:05:17.714 16:54:07 json_config -- json_config/common.sh@31 -- # local app=target 00:05:17.714 16:54:07 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:17.714 16:54:07 json_config -- json_config/common.sh@35 -- # [[ -n 59168 ]] 00:05:17.714 16:54:07 json_config -- json_config/common.sh@38 -- # kill -SIGINT 59168 00:05:17.714 16:54:07 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:17.714 16:54:07 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:17.714 16:54:07 json_config -- json_config/common.sh@41 -- # kill -0 59168 00:05:17.714 16:54:07 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:18.280 16:54:08 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:18.280 16:54:08 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:18.280 16:54:08 json_config -- json_config/common.sh@41 -- # kill -0 59168 00:05:18.280 16:54:08 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:18.280 16:54:08 json_config -- json_config/common.sh@43 -- # break 00:05:18.280 SPDK target shutdown done 00:05:18.280 16:54:08 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:18.280 16:54:08 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:18.280 INFO: relaunching applications... 00:05:18.280 16:54:08 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 
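The shutdown traced above is just SIGINT plus a bounded liveness poll: kill -SIGINT, then up to 30 checks with kill -0 spaced 0.5 s apart. A simplified stand-in for json_config_test_shutdown_app:

shutdown_app() {
    local pid=$1
    kill -SIGINT "$pid"
    for ((i = 0; i < 30; i++)); do                 # roughly 15 s budget, as in the trace
        if ! kill -0 "$pid" 2>/dev/null; then
            echo 'SPDK target shutdown done'
            return 0
        fi
        sleep 0.5
    done
    echo "app $pid did not exit after SIGINT" >&2
    return 1
}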
00:05:18.280 16:54:08 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:18.280 16:54:08 json_config -- json_config/common.sh@9 -- # local app=target 00:05:18.280 16:54:08 json_config -- json_config/common.sh@10 -- # shift 00:05:18.280 16:54:08 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:18.280 16:54:08 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:18.280 16:54:08 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:18.280 16:54:08 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:18.280 16:54:08 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:18.280 16:54:08 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=59353 00:05:18.280 16:54:08 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:18.280 Waiting for target to run... 00:05:18.280 16:54:08 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:18.280 16:54:08 json_config -- json_config/common.sh@25 -- # waitforlisten 59353 /var/tmp/spdk_tgt.sock 00:05:18.280 16:54:08 json_config -- common/autotest_common.sh@829 -- # '[' -z 59353 ']' 00:05:18.280 16:54:08 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:18.280 16:54:08 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:18.280 16:54:08 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:18.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:18.280 16:54:08 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:18.280 16:54:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:18.280 [2024-07-15 16:54:08.444822] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:05:18.280 [2024-07-15 16:54:08.444918] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59353 ] 00:05:18.846 [2024-07-15 16:54:08.902058] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.846 [2024-07-15 16:54:08.990345] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.846 [2024-07-15 16:54:09.116447] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:19.104 [2024-07-15 16:54:09.322429] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:19.104 [2024-07-15 16:54:09.354498] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:19.104 16:54:09 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:19.104 00:05:19.104 16:54:09 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:19.104 16:54:09 json_config -- json_config/common.sh@26 -- # echo '' 00:05:19.104 16:54:09 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:05:19.104 16:54:09 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 
00:05:19.104 INFO: Checking if target configuration is the same... 00:05:19.104 16:54:09 json_config -- json_config/json_config.sh@378 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:19.104 16:54:09 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:05:19.104 16:54:09 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:19.104 + '[' 2 -ne 2 ']' 00:05:19.104 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:19.104 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:05:19.104 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:19.104 +++ basename /dev/fd/62 00:05:19.361 ++ mktemp /tmp/62.XXX 00:05:19.361 + tmp_file_1=/tmp/62.ThB 00:05:19.361 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:19.361 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:19.361 + tmp_file_2=/tmp/spdk_tgt_config.json.07p 00:05:19.361 + ret=0 00:05:19.361 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:19.619 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:19.619 + diff -u /tmp/62.ThB /tmp/spdk_tgt_config.json.07p 00:05:19.619 INFO: JSON config files are the same 00:05:19.619 + echo 'INFO: JSON config files are the same' 00:05:19.619 + rm /tmp/62.ThB /tmp/spdk_tgt_config.json.07p 00:05:19.619 + exit 0 00:05:19.619 16:54:09 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:05:19.619 INFO: changing configuration and checking if this can be detected... 00:05:19.619 16:54:09 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:19.619 16:54:09 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:19.619 16:54:09 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:19.876 16:54:10 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:05:19.876 16:54:10 json_config -- json_config/json_config.sh@387 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:19.876 16:54:10 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:19.876 + '[' 2 -ne 2 ']' 00:05:19.876 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:19.876 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:05:19.876 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:19.876 +++ basename /dev/fd/62 00:05:19.876 ++ mktemp /tmp/62.XXX 00:05:19.876 + tmp_file_1=/tmp/62.5EN 00:05:19.876 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:19.876 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:19.876 + tmp_file_2=/tmp/spdk_tgt_config.json.RgJ 00:05:19.876 + ret=0 00:05:19.876 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:20.442 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:20.442 + diff -u /tmp/62.5EN /tmp/spdk_tgt_config.json.RgJ 00:05:20.442 + ret=1 00:05:20.442 + echo '=== Start of file: /tmp/62.5EN ===' 00:05:20.442 + cat /tmp/62.5EN 00:05:20.442 + echo '=== End of file: /tmp/62.5EN ===' 00:05:20.442 + echo '' 00:05:20.442 + echo '=== Start of file: /tmp/spdk_tgt_config.json.RgJ ===' 00:05:20.442 + cat /tmp/spdk_tgt_config.json.RgJ 00:05:20.442 + echo '=== End of file: /tmp/spdk_tgt_config.json.RgJ ===' 00:05:20.442 + echo '' 00:05:20.442 + rm /tmp/62.5EN /tmp/spdk_tgt_config.json.RgJ 00:05:20.442 + exit 1 00:05:20.443 INFO: configuration change detected. 00:05:20.443 16:54:10 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:05:20.443 16:54:10 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:05:20.443 16:54:10 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:05:20.443 16:54:10 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:20.443 16:54:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:20.443 16:54:10 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:05:20.443 16:54:10 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:05:20.443 16:54:10 json_config -- json_config/json_config.sh@317 -- # [[ -n 59353 ]] 00:05:20.443 16:54:10 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:05:20.443 16:54:10 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:05:20.443 16:54:10 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:20.443 16:54:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:20.443 16:54:10 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:05:20.443 16:54:10 json_config -- json_config/json_config.sh@193 -- # uname -s 00:05:20.443 16:54:10 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:05:20.443 16:54:10 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:05:20.443 16:54:10 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:05:20.443 16:54:10 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:05:20.443 16:54:10 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:20.443 16:54:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:20.443 16:54:10 json_config -- json_config/json_config.sh@323 -- # killprocess 59353 00:05:20.443 16:54:10 json_config -- common/autotest_common.sh@948 -- # '[' -z 59353 ']' 00:05:20.443 16:54:10 json_config -- common/autotest_common.sh@952 -- # kill -0 59353 00:05:20.443 16:54:10 json_config -- common/autotest_common.sh@953 -- # uname 00:05:20.443 16:54:10 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:20.443 16:54:10 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59353 00:05:20.443 
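The comparison above never diffs raw files: both the on-disk spdk_tgt_config.json and a fresh save_config dump are passed through config_filter.py -method sort before diff -u, so only real content changes (here, the MallocBdevForConfigChangeCheck bdev deleted just before) trip the 'configuration change detected' path. A condensed sketch, assuming config_filter.py filters stdin to stdout the way json_diff.sh drives it and using illustrative temp-file names instead of mktemp output:

sort_cfg() { test/json_config/config_filter.py -method sort; }

scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config | sort_cfg > /tmp/live.json
sort_cfg < spdk_tgt_config.json > /tmp/saved.json
if diff -u /tmp/saved.json /tmp/live.json; then
    echo 'INFO: JSON config files are the same'
else
    echo 'INFO: configuration change detected.'
fi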
16:54:10 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:20.443 killing process with pid 59353 00:05:20.443 16:54:10 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:20.443 16:54:10 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59353' 00:05:20.443 16:54:10 json_config -- common/autotest_common.sh@967 -- # kill 59353 00:05:20.443 16:54:10 json_config -- common/autotest_common.sh@972 -- # wait 59353 00:05:20.701 16:54:10 json_config -- json_config/json_config.sh@326 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:20.701 16:54:10 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:05:20.701 16:54:10 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:20.701 16:54:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:20.701 16:54:10 json_config -- json_config/json_config.sh@328 -- # return 0 00:05:20.701 INFO: Success 00:05:20.701 16:54:10 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:05:20.701 00:05:20.701 real 0m8.066s 00:05:20.701 user 0m11.390s 00:05:20.701 sys 0m1.731s 00:05:20.701 16:54:10 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:20.701 16:54:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:20.701 ************************************ 00:05:20.701 END TEST json_config 00:05:20.701 ************************************ 00:05:20.701 16:54:10 -- common/autotest_common.sh@1142 -- # return 0 00:05:20.701 16:54:10 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:20.701 16:54:10 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:20.701 16:54:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:20.701 16:54:10 -- common/autotest_common.sh@10 -- # set +x 00:05:20.701 ************************************ 00:05:20.701 START TEST json_config_extra_key 00:05:20.701 ************************************ 00:05:20.701 16:54:10 json_config_extra_key -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:20.965 16:54:11 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:20.965 16:54:11 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:20.965 16:54:11 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:20.965 16:54:11 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:20.965 16:54:11 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:20.965 16:54:11 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:20.965 16:54:11 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:20.965 16:54:11 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:20.965 16:54:11 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:20.965 16:54:11 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:20.965 16:54:11 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:20.965 16:54:11 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:20.965 16:54:11 json_config_extra_key -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da 00:05:20.965 16:54:11 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=0b4e8503-7bac-4879-926a-209303c4b3da 00:05:20.965 16:54:11 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:20.965 16:54:11 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:20.965 16:54:11 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:20.965 16:54:11 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:20.965 16:54:11 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:20.965 16:54:11 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:20.965 16:54:11 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:20.965 16:54:11 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:20.965 16:54:11 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:20.965 16:54:11 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:20.965 16:54:11 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:20.965 16:54:11 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:20.965 16:54:11 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:20.965 16:54:11 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:20.965 16:54:11 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:20.965 16:54:11 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:20.965 16:54:11 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:20.965 16:54:11 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:20.965 16:54:11 json_config_extra_key -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:20.965 16:54:11 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:20.965 16:54:11 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:20.965 16:54:11 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:20.965 16:54:11 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:20.965 16:54:11 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:20.965 16:54:11 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:20.965 16:54:11 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:20.965 16:54:11 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:20.965 16:54:11 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:20.965 16:54:11 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:20.965 16:54:11 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:20.965 16:54:11 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:20.965 16:54:11 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:20.965 INFO: launching applications... 00:05:20.965 16:54:11 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:20.965 16:54:11 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:20.965 16:54:11 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:20.965 16:54:11 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:20.965 16:54:11 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:20.965 16:54:11 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:20.965 16:54:11 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:20.965 16:54:11 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:20.965 16:54:11 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:20.965 Waiting for target to run... 00:05:20.965 16:54:11 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=59499 00:05:20.965 16:54:11 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
00:05:20.965 16:54:11 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 59499 /var/tmp/spdk_tgt.sock 00:05:20.965 16:54:11 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 59499 ']' 00:05:20.965 16:54:11 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:20.965 16:54:11 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:20.965 16:54:11 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:20.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:20.965 16:54:11 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:20.965 16:54:11 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:20.965 16:54:11 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:20.965 [2024-07-15 16:54:11.099624] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:05:20.965 [2024-07-15 16:54:11.099725] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59499 ] 00:05:21.232 [2024-07-15 16:54:11.525000] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.490 [2024-07-15 16:54:11.611692] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.490 [2024-07-15 16:54:11.632371] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:22.055 00:05:22.055 INFO: shutting down applications... 00:05:22.055 16:54:12 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:22.055 16:54:12 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:05:22.055 16:54:12 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:22.055 16:54:12 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
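At this point the target is up: spdk_tgt was started with a one-core mask (-m 0x1), 1024 MiB of memory (-s 1024), a private RPC socket (-r /var/tmp/spdk_tgt.sock) and the extra_key.json configuration, and the test only moved on once that socket answered. A rough hand-run equivalent using the same paths as this log; the polling loop is an illustration, not the exact waitforlisten implementation:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
        -r /var/tmp/spdk_tgt.sock \
        --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json &
    tgt_pid=$!

    # Poll the RPC socket until the target responds, bailing out if it dies.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$tgt_pid" 2>/dev/null || { echo 'spdk_tgt exited before listening' >&2; break; }
        sleep 0.5
    done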
00:05:22.055 16:54:12 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:22.055 16:54:12 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:22.055 16:54:12 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:22.055 16:54:12 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 59499 ]] 00:05:22.055 16:54:12 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 59499 00:05:22.055 16:54:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:22.055 16:54:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:22.055 16:54:12 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59499 00:05:22.055 16:54:12 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:22.621 16:54:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:22.621 16:54:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:22.621 16:54:12 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59499 00:05:22.621 16:54:12 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:22.621 16:54:12 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:22.621 16:54:12 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:22.621 16:54:12 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:22.621 SPDK target shutdown done 00:05:22.621 16:54:12 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:22.621 Success 00:05:22.621 00:05:22.621 real 0m1.664s 00:05:22.621 user 0m1.599s 00:05:22.621 sys 0m0.423s 00:05:22.621 16:54:12 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:22.621 ************************************ 00:05:22.621 END TEST json_config_extra_key 00:05:22.621 ************************************ 00:05:22.621 16:54:12 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:22.621 16:54:12 -- common/autotest_common.sh@1142 -- # return 0 00:05:22.621 16:54:12 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:22.621 16:54:12 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:22.621 16:54:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:22.621 16:54:12 -- common/autotest_common.sh@10 -- # set +x 00:05:22.621 ************************************ 00:05:22.621 START TEST alias_rpc 00:05:22.621 ************************************ 00:05:22.621 16:54:12 alias_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:22.621 * Looking for test storage... 
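The shutdown half of the json_config_extra_key run, traced a few lines back before the alias_rpc suite starts, follows a simple pattern: send SIGINT, then poll with kill -0 for up to 30 half-second intervals until the target is gone. A simplified sketch reconstructed from the trace (not the verbatim test/json_config/common.sh):

    kill -SIGINT "${app_pid[target]}"
    for (( i = 0; i < 30; i++ )); do
        # kill -0 delivers no signal; it only checks whether the PID still exists.
        kill -0 "${app_pid[target]}" 2>/dev/null || break
        sleep 0.5
    done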
00:05:22.621 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:22.621 16:54:12 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:22.621 16:54:12 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=59569 00:05:22.621 16:54:12 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 59569 00:05:22.621 16:54:12 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:22.621 16:54:12 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 59569 ']' 00:05:22.621 16:54:12 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:22.621 16:54:12 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:22.621 16:54:12 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:22.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:22.622 16:54:12 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:22.622 16:54:12 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:22.622 [2024-07-15 16:54:12.816965] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:05:22.622 [2024-07-15 16:54:12.817082] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59569 ] 00:05:22.880 [2024-07-15 16:54:12.956981] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.880 [2024-07-15 16:54:13.049765] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.880 [2024-07-15 16:54:13.106461] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:23.816 16:54:13 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:23.816 16:54:13 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:23.816 16:54:13 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:23.816 16:54:14 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 59569 00:05:23.816 16:54:14 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 59569 ']' 00:05:23.816 16:54:14 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 59569 00:05:23.816 16:54:14 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:05:23.816 16:54:14 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:23.816 16:54:14 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59569 00:05:23.816 16:54:14 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:23.816 killing process with pid 59569 00:05:23.816 16:54:14 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:23.816 16:54:14 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59569' 00:05:23.816 16:54:14 alias_rpc -- common/autotest_common.sh@967 -- # kill 59569 00:05:23.816 16:54:14 alias_rpc -- common/autotest_common.sh@972 -- # wait 59569 00:05:24.383 00:05:24.383 real 0m1.779s 00:05:24.383 user 0m2.035s 00:05:24.383 sys 0m0.399s 00:05:24.383 16:54:14 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:24.383 16:54:14 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.383 
************************************ 00:05:24.383 END TEST alias_rpc 00:05:24.383 ************************************ 00:05:24.383 16:54:14 -- common/autotest_common.sh@1142 -- # return 0 00:05:24.383 16:54:14 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:05:24.383 16:54:14 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:24.383 16:54:14 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:24.383 16:54:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:24.383 16:54:14 -- common/autotest_common.sh@10 -- # set +x 00:05:24.383 ************************************ 00:05:24.383 START TEST spdkcli_tcp 00:05:24.383 ************************************ 00:05:24.383 16:54:14 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:24.383 * Looking for test storage... 00:05:24.383 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:24.383 16:54:14 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:24.383 16:54:14 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:24.383 16:54:14 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:24.383 16:54:14 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:24.383 16:54:14 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:24.383 16:54:14 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:24.383 16:54:14 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:24.383 16:54:14 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:24.383 16:54:14 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:24.383 16:54:14 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=59640 00:05:24.383 16:54:14 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 59640 00:05:24.383 16:54:14 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:24.383 16:54:14 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 59640 ']' 00:05:24.383 16:54:14 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.383 16:54:14 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:24.383 16:54:14 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:24.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:24.383 16:54:14 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:24.383 16:54:14 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:24.383 [2024-07-15 16:54:14.648175] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:05:24.383 [2024-07-15 16:54:14.648249] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59640 ] 00:05:24.642 [2024-07-15 16:54:14.779225] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:24.642 [2024-07-15 16:54:14.874809] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:24.642 [2024-07-15 16:54:14.874820] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.642 [2024-07-15 16:54:14.930345] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:25.579 16:54:15 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:25.579 16:54:15 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:05:25.579 16:54:15 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:25.579 16:54:15 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=59657 00:05:25.579 16:54:15 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:25.579 [ 00:05:25.579 "bdev_malloc_delete", 00:05:25.579 "bdev_malloc_create", 00:05:25.579 "bdev_null_resize", 00:05:25.579 "bdev_null_delete", 00:05:25.579 "bdev_null_create", 00:05:25.579 "bdev_nvme_cuse_unregister", 00:05:25.579 "bdev_nvme_cuse_register", 00:05:25.579 "bdev_opal_new_user", 00:05:25.579 "bdev_opal_set_lock_state", 00:05:25.579 "bdev_opal_delete", 00:05:25.579 "bdev_opal_get_info", 00:05:25.579 "bdev_opal_create", 00:05:25.579 "bdev_nvme_opal_revert", 00:05:25.579 "bdev_nvme_opal_init", 00:05:25.579 "bdev_nvme_send_cmd", 00:05:25.579 "bdev_nvme_get_path_iostat", 00:05:25.579 "bdev_nvme_get_mdns_discovery_info", 00:05:25.579 "bdev_nvme_stop_mdns_discovery", 00:05:25.579 "bdev_nvme_start_mdns_discovery", 00:05:25.579 "bdev_nvme_set_multipath_policy", 00:05:25.579 "bdev_nvme_set_preferred_path", 00:05:25.579 "bdev_nvme_get_io_paths", 00:05:25.579 "bdev_nvme_remove_error_injection", 00:05:25.579 "bdev_nvme_add_error_injection", 00:05:25.579 "bdev_nvme_get_discovery_info", 00:05:25.579 "bdev_nvme_stop_discovery", 00:05:25.579 "bdev_nvme_start_discovery", 00:05:25.579 "bdev_nvme_get_controller_health_info", 00:05:25.579 "bdev_nvme_disable_controller", 00:05:25.579 "bdev_nvme_enable_controller", 00:05:25.579 "bdev_nvme_reset_controller", 00:05:25.579 "bdev_nvme_get_transport_statistics", 00:05:25.579 "bdev_nvme_apply_firmware", 00:05:25.579 "bdev_nvme_detach_controller", 00:05:25.579 "bdev_nvme_get_controllers", 00:05:25.579 "bdev_nvme_attach_controller", 00:05:25.579 "bdev_nvme_set_hotplug", 00:05:25.579 "bdev_nvme_set_options", 00:05:25.579 "bdev_passthru_delete", 00:05:25.579 "bdev_passthru_create", 00:05:25.579 "bdev_lvol_set_parent_bdev", 00:05:25.579 "bdev_lvol_set_parent", 00:05:25.579 "bdev_lvol_check_shallow_copy", 00:05:25.579 "bdev_lvol_start_shallow_copy", 00:05:25.579 "bdev_lvol_grow_lvstore", 00:05:25.579 "bdev_lvol_get_lvols", 00:05:25.579 "bdev_lvol_get_lvstores", 00:05:25.579 "bdev_lvol_delete", 00:05:25.579 "bdev_lvol_set_read_only", 00:05:25.579 "bdev_lvol_resize", 00:05:25.579 "bdev_lvol_decouple_parent", 00:05:25.579 "bdev_lvol_inflate", 00:05:25.579 "bdev_lvol_rename", 00:05:25.579 "bdev_lvol_clone_bdev", 00:05:25.579 "bdev_lvol_clone", 00:05:25.579 "bdev_lvol_snapshot", 00:05:25.579 "bdev_lvol_create", 
00:05:25.579 "bdev_lvol_delete_lvstore", 00:05:25.579 "bdev_lvol_rename_lvstore", 00:05:25.579 "bdev_lvol_create_lvstore", 00:05:25.579 "bdev_raid_set_options", 00:05:25.579 "bdev_raid_remove_base_bdev", 00:05:25.579 "bdev_raid_add_base_bdev", 00:05:25.579 "bdev_raid_delete", 00:05:25.579 "bdev_raid_create", 00:05:25.579 "bdev_raid_get_bdevs", 00:05:25.579 "bdev_error_inject_error", 00:05:25.579 "bdev_error_delete", 00:05:25.579 "bdev_error_create", 00:05:25.580 "bdev_split_delete", 00:05:25.580 "bdev_split_create", 00:05:25.580 "bdev_delay_delete", 00:05:25.580 "bdev_delay_create", 00:05:25.580 "bdev_delay_update_latency", 00:05:25.580 "bdev_zone_block_delete", 00:05:25.580 "bdev_zone_block_create", 00:05:25.580 "blobfs_create", 00:05:25.580 "blobfs_detect", 00:05:25.580 "blobfs_set_cache_size", 00:05:25.580 "bdev_aio_delete", 00:05:25.580 "bdev_aio_rescan", 00:05:25.580 "bdev_aio_create", 00:05:25.580 "bdev_ftl_set_property", 00:05:25.580 "bdev_ftl_get_properties", 00:05:25.580 "bdev_ftl_get_stats", 00:05:25.580 "bdev_ftl_unmap", 00:05:25.580 "bdev_ftl_unload", 00:05:25.580 "bdev_ftl_delete", 00:05:25.580 "bdev_ftl_load", 00:05:25.580 "bdev_ftl_create", 00:05:25.580 "bdev_virtio_attach_controller", 00:05:25.580 "bdev_virtio_scsi_get_devices", 00:05:25.580 "bdev_virtio_detach_controller", 00:05:25.580 "bdev_virtio_blk_set_hotplug", 00:05:25.580 "bdev_iscsi_delete", 00:05:25.580 "bdev_iscsi_create", 00:05:25.580 "bdev_iscsi_set_options", 00:05:25.580 "bdev_uring_delete", 00:05:25.580 "bdev_uring_rescan", 00:05:25.580 "bdev_uring_create", 00:05:25.580 "accel_error_inject_error", 00:05:25.580 "ioat_scan_accel_module", 00:05:25.580 "dsa_scan_accel_module", 00:05:25.580 "iaa_scan_accel_module", 00:05:25.580 "keyring_file_remove_key", 00:05:25.580 "keyring_file_add_key", 00:05:25.580 "keyring_linux_set_options", 00:05:25.580 "iscsi_get_histogram", 00:05:25.580 "iscsi_enable_histogram", 00:05:25.580 "iscsi_set_options", 00:05:25.580 "iscsi_get_auth_groups", 00:05:25.580 "iscsi_auth_group_remove_secret", 00:05:25.580 "iscsi_auth_group_add_secret", 00:05:25.580 "iscsi_delete_auth_group", 00:05:25.580 "iscsi_create_auth_group", 00:05:25.580 "iscsi_set_discovery_auth", 00:05:25.580 "iscsi_get_options", 00:05:25.580 "iscsi_target_node_request_logout", 00:05:25.580 "iscsi_target_node_set_redirect", 00:05:25.580 "iscsi_target_node_set_auth", 00:05:25.580 "iscsi_target_node_add_lun", 00:05:25.580 "iscsi_get_stats", 00:05:25.580 "iscsi_get_connections", 00:05:25.580 "iscsi_portal_group_set_auth", 00:05:25.580 "iscsi_start_portal_group", 00:05:25.580 "iscsi_delete_portal_group", 00:05:25.580 "iscsi_create_portal_group", 00:05:25.580 "iscsi_get_portal_groups", 00:05:25.580 "iscsi_delete_target_node", 00:05:25.580 "iscsi_target_node_remove_pg_ig_maps", 00:05:25.580 "iscsi_target_node_add_pg_ig_maps", 00:05:25.580 "iscsi_create_target_node", 00:05:25.580 "iscsi_get_target_nodes", 00:05:25.580 "iscsi_delete_initiator_group", 00:05:25.580 "iscsi_initiator_group_remove_initiators", 00:05:25.580 "iscsi_initiator_group_add_initiators", 00:05:25.580 "iscsi_create_initiator_group", 00:05:25.580 "iscsi_get_initiator_groups", 00:05:25.580 "nvmf_set_crdt", 00:05:25.580 "nvmf_set_config", 00:05:25.580 "nvmf_set_max_subsystems", 00:05:25.580 "nvmf_stop_mdns_prr", 00:05:25.580 "nvmf_publish_mdns_prr", 00:05:25.580 "nvmf_subsystem_get_listeners", 00:05:25.580 "nvmf_subsystem_get_qpairs", 00:05:25.580 "nvmf_subsystem_get_controllers", 00:05:25.580 "nvmf_get_stats", 00:05:25.580 "nvmf_get_transports", 00:05:25.580 
"nvmf_create_transport", 00:05:25.580 "nvmf_get_targets", 00:05:25.580 "nvmf_delete_target", 00:05:25.580 "nvmf_create_target", 00:05:25.580 "nvmf_subsystem_allow_any_host", 00:05:25.580 "nvmf_subsystem_remove_host", 00:05:25.580 "nvmf_subsystem_add_host", 00:05:25.580 "nvmf_ns_remove_host", 00:05:25.580 "nvmf_ns_add_host", 00:05:25.580 "nvmf_subsystem_remove_ns", 00:05:25.580 "nvmf_subsystem_add_ns", 00:05:25.580 "nvmf_subsystem_listener_set_ana_state", 00:05:25.580 "nvmf_discovery_get_referrals", 00:05:25.580 "nvmf_discovery_remove_referral", 00:05:25.580 "nvmf_discovery_add_referral", 00:05:25.580 "nvmf_subsystem_remove_listener", 00:05:25.580 "nvmf_subsystem_add_listener", 00:05:25.580 "nvmf_delete_subsystem", 00:05:25.580 "nvmf_create_subsystem", 00:05:25.580 "nvmf_get_subsystems", 00:05:25.580 "env_dpdk_get_mem_stats", 00:05:25.580 "nbd_get_disks", 00:05:25.580 "nbd_stop_disk", 00:05:25.580 "nbd_start_disk", 00:05:25.580 "ublk_recover_disk", 00:05:25.580 "ublk_get_disks", 00:05:25.580 "ublk_stop_disk", 00:05:25.580 "ublk_start_disk", 00:05:25.580 "ublk_destroy_target", 00:05:25.580 "ublk_create_target", 00:05:25.580 "virtio_blk_create_transport", 00:05:25.580 "virtio_blk_get_transports", 00:05:25.580 "vhost_controller_set_coalescing", 00:05:25.580 "vhost_get_controllers", 00:05:25.580 "vhost_delete_controller", 00:05:25.580 "vhost_create_blk_controller", 00:05:25.580 "vhost_scsi_controller_remove_target", 00:05:25.580 "vhost_scsi_controller_add_target", 00:05:25.580 "vhost_start_scsi_controller", 00:05:25.580 "vhost_create_scsi_controller", 00:05:25.580 "thread_set_cpumask", 00:05:25.580 "framework_get_governor", 00:05:25.580 "framework_get_scheduler", 00:05:25.580 "framework_set_scheduler", 00:05:25.580 "framework_get_reactors", 00:05:25.580 "thread_get_io_channels", 00:05:25.580 "thread_get_pollers", 00:05:25.580 "thread_get_stats", 00:05:25.580 "framework_monitor_context_switch", 00:05:25.580 "spdk_kill_instance", 00:05:25.580 "log_enable_timestamps", 00:05:25.580 "log_get_flags", 00:05:25.580 "log_clear_flag", 00:05:25.580 "log_set_flag", 00:05:25.580 "log_get_level", 00:05:25.580 "log_set_level", 00:05:25.580 "log_get_print_level", 00:05:25.580 "log_set_print_level", 00:05:25.580 "framework_enable_cpumask_locks", 00:05:25.580 "framework_disable_cpumask_locks", 00:05:25.580 "framework_wait_init", 00:05:25.580 "framework_start_init", 00:05:25.580 "scsi_get_devices", 00:05:25.580 "bdev_get_histogram", 00:05:25.580 "bdev_enable_histogram", 00:05:25.580 "bdev_set_qos_limit", 00:05:25.580 "bdev_set_qd_sampling_period", 00:05:25.580 "bdev_get_bdevs", 00:05:25.580 "bdev_reset_iostat", 00:05:25.580 "bdev_get_iostat", 00:05:25.580 "bdev_examine", 00:05:25.580 "bdev_wait_for_examine", 00:05:25.580 "bdev_set_options", 00:05:25.580 "notify_get_notifications", 00:05:25.580 "notify_get_types", 00:05:25.580 "accel_get_stats", 00:05:25.580 "accel_set_options", 00:05:25.580 "accel_set_driver", 00:05:25.580 "accel_crypto_key_destroy", 00:05:25.580 "accel_crypto_keys_get", 00:05:25.580 "accel_crypto_key_create", 00:05:25.580 "accel_assign_opc", 00:05:25.580 "accel_get_module_info", 00:05:25.580 "accel_get_opc_assignments", 00:05:25.580 "vmd_rescan", 00:05:25.580 "vmd_remove_device", 00:05:25.580 "vmd_enable", 00:05:25.580 "sock_get_default_impl", 00:05:25.580 "sock_set_default_impl", 00:05:25.580 "sock_impl_set_options", 00:05:25.580 "sock_impl_get_options", 00:05:25.580 "iobuf_get_stats", 00:05:25.580 "iobuf_set_options", 00:05:25.580 "framework_get_pci_devices", 00:05:25.580 
"framework_get_config", 00:05:25.580 "framework_get_subsystems", 00:05:25.580 "trace_get_info", 00:05:25.580 "trace_get_tpoint_group_mask", 00:05:25.580 "trace_disable_tpoint_group", 00:05:25.580 "trace_enable_tpoint_group", 00:05:25.580 "trace_clear_tpoint_mask", 00:05:25.580 "trace_set_tpoint_mask", 00:05:25.580 "keyring_get_keys", 00:05:25.580 "spdk_get_version", 00:05:25.580 "rpc_get_methods" 00:05:25.580 ] 00:05:25.580 16:54:15 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:25.580 16:54:15 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:25.580 16:54:15 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:25.839 16:54:15 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:25.839 16:54:15 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 59640 00:05:25.839 16:54:15 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 59640 ']' 00:05:25.839 16:54:15 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 59640 00:05:25.839 16:54:15 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:05:25.839 16:54:15 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:25.839 16:54:15 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59640 00:05:25.839 16:54:15 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:25.839 killing process with pid 59640 00:05:25.839 16:54:15 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:25.839 16:54:15 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59640' 00:05:25.839 16:54:15 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 59640 00:05:25.839 16:54:15 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 59640 00:05:26.098 00:05:26.098 real 0m1.792s 00:05:26.098 user 0m3.333s 00:05:26.098 sys 0m0.448s 00:05:26.098 16:54:16 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:26.098 16:54:16 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:26.098 ************************************ 00:05:26.098 END TEST spdkcli_tcp 00:05:26.098 ************************************ 00:05:26.098 16:54:16 -- common/autotest_common.sh@1142 -- # return 0 00:05:26.098 16:54:16 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:26.098 16:54:16 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:26.098 16:54:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:26.098 16:54:16 -- common/autotest_common.sh@10 -- # set +x 00:05:26.098 ************************************ 00:05:26.098 START TEST dpdk_mem_utility 00:05:26.098 ************************************ 00:05:26.098 16:54:16 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:26.357 * Looking for test storage... 
00:05:26.357 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:26.357 16:54:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:26.357 16:54:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=59725 00:05:26.357 16:54:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 59725 00:05:26.357 16:54:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:26.357 16:54:16 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 59725 ']' 00:05:26.357 16:54:16 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:26.357 16:54:16 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:26.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:26.357 16:54:16 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:26.357 16:54:16 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:26.357 16:54:16 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:26.357 [2024-07-15 16:54:16.500447] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:05:26.357 [2024-07-15 16:54:16.500538] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59725 ] 00:05:26.357 [2024-07-15 16:54:16.640151] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.616 [2024-07-15 16:54:16.767844] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.616 [2024-07-15 16:54:16.825989] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:27.185 16:54:17 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:27.185 16:54:17 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:05:27.185 16:54:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:27.185 16:54:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:27.185 16:54:17 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:27.185 16:54:17 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:27.185 { 00:05:27.185 "filename": "/tmp/spdk_mem_dump.txt" 00:05:27.185 } 00:05:27.185 16:54:17 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:27.185 16:54:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:27.185 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:27.185 1 heaps totaling size 814.000000 MiB 00:05:27.185 size: 814.000000 MiB heap id: 0 00:05:27.185 end heaps---------- 00:05:27.185 8 mempools totaling size 598.116089 MiB 00:05:27.185 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:27.185 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:27.185 size: 84.521057 MiB name: bdev_io_59725 00:05:27.185 size: 51.011292 MiB name: evtpool_59725 00:05:27.185 size: 50.003479 
MiB name: msgpool_59725 00:05:27.185 size: 21.763794 MiB name: PDU_Pool 00:05:27.185 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:27.185 size: 0.026123 MiB name: Session_Pool 00:05:27.185 end mempools------- 00:05:27.185 6 memzones totaling size 4.142822 MiB 00:05:27.185 size: 1.000366 MiB name: RG_ring_0_59725 00:05:27.185 size: 1.000366 MiB name: RG_ring_1_59725 00:05:27.185 size: 1.000366 MiB name: RG_ring_4_59725 00:05:27.185 size: 1.000366 MiB name: RG_ring_5_59725 00:05:27.185 size: 0.125366 MiB name: RG_ring_2_59725 00:05:27.185 size: 0.015991 MiB name: RG_ring_3_59725 00:05:27.185 end memzones------- 00:05:27.185 16:54:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:27.481 heap id: 0 total size: 814.000000 MiB number of busy elements: 303 number of free elements: 15 00:05:27.481 list of free elements. size: 12.471375 MiB 00:05:27.481 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:27.481 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:27.481 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:27.481 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:27.481 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:27.481 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:27.481 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:27.481 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:27.481 element at address: 0x200000200000 with size: 0.833191 MiB 00:05:27.481 element at address: 0x20001aa00000 with size: 0.568604 MiB 00:05:27.481 element at address: 0x20000b200000 with size: 0.488892 MiB 00:05:27.481 element at address: 0x200000800000 with size: 0.486145 MiB 00:05:27.481 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:27.481 element at address: 0x200027e00000 with size: 0.395935 MiB 00:05:27.481 element at address: 0x200003a00000 with size: 0.347839 MiB 00:05:27.481 list of standard malloc elements. 
size: 199.266052 MiB 00:05:27.481 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:27.481 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:27.481 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:27.481 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:27.481 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:27.481 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:27.481 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:27.481 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:27.481 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:27.481 element at address: 0x2000002d54c0 with size: 0.000183 MiB 00:05:27.481 element at address: 0x2000002d5580 with size: 0.000183 MiB 00:05:27.481 element at address: 0x2000002d5640 with size: 0.000183 MiB 00:05:27.481 element at address: 0x2000002d5700 with size: 0.000183 MiB 00:05:27.481 element at address: 0x2000002d57c0 with size: 0.000183 MiB 00:05:27.481 element at address: 0x2000002d5880 with size: 0.000183 MiB 00:05:27.481 element at address: 0x2000002d5940 with size: 0.000183 MiB 00:05:27.481 element at address: 0x2000002d5a00 with size: 0.000183 MiB 00:05:27.481 element at address: 0x2000002d5ac0 with size: 0.000183 MiB 00:05:27.481 element at address: 0x2000002d5b80 with size: 0.000183 MiB 00:05:27.481 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:05:27.481 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:05:27.481 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:05:27.481 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:05:27.481 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:05:27.481 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:05:27.481 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:05:27.481 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:05:27.481 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:05:27.481 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:05:27.481 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:05:27.481 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:05:27.481 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:05:27.481 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:05:27.481 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:05:27.481 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:05:27.481 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:05:27.481 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:05:27.481 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:05:27.481 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:05:27.481 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:05:27.481 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:05:27.481 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:05:27.481 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:05:27.481 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:05:27.481 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:05:27.481 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:05:27.481 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:05:27.481 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:05:27.481 element at address: 0x2000002d7340 with size: 0.000183 MiB 
00:05:27.481 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:05:27.481 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:05:27.481 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:05:27.481 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:05:27.481 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:05:27.481 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:05:27.481 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:05:27.481 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:05:27.481 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:05:27.481 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:27.481 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:27.481 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:27.481 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:27.481 element at address: 0x20000087c740 with size: 0.000183 MiB 00:05:27.481 element at address: 0x20000087c800 with size: 0.000183 MiB 00:05:27.481 element at address: 0x20000087c8c0 with size: 0.000183 MiB 00:05:27.481 element at address: 0x20000087c980 with size: 0.000183 MiB 00:05:27.481 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:05:27.481 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:05:27.481 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:05:27.481 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:05:27.481 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:05:27.481 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:27.481 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:27.481 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:27.481 element at address: 0x200003a590c0 with size: 0.000183 MiB 00:05:27.481 element at address: 0x200003a59180 with size: 0.000183 MiB 00:05:27.481 element at address: 0x200003a59240 with size: 0.000183 MiB 00:05:27.481 element at address: 0x200003a59300 with size: 0.000183 MiB 00:05:27.481 element at address: 0x200003a593c0 with size: 0.000183 MiB 00:05:27.481 element at address: 0x200003a59480 with size: 0.000183 MiB 00:05:27.481 element at address: 0x200003a59540 with size: 0.000183 MiB 00:05:27.481 element at address: 0x200003a59600 with size: 0.000183 MiB 00:05:27.481 element at address: 0x200003a596c0 with size: 0.000183 MiB 00:05:27.481 element at address: 0x200003a59780 with size: 0.000183 MiB 00:05:27.481 element at address: 0x200003a59840 with size: 0.000183 MiB 00:05:27.481 element at address: 0x200003a59900 with size: 0.000183 MiB 00:05:27.481 element at address: 0x200003a599c0 with size: 0.000183 MiB 00:05:27.481 element at address: 0x200003a59a80 with size: 0.000183 MiB 00:05:27.481 element at address: 0x200003a59b40 with size: 0.000183 MiB 00:05:27.481 element at address: 0x200003a59c00 with size: 0.000183 MiB 00:05:27.481 element at address: 0x200003a59cc0 with size: 0.000183 MiB 00:05:27.481 element at address: 0x200003a59d80 with size: 0.000183 MiB 00:05:27.481 element at address: 0x200003a59e40 with size: 0.000183 MiB 00:05:27.481 element at address: 0x200003a59f00 with size: 0.000183 MiB 00:05:27.481 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:05:27.481 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:05:27.481 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:05:27.481 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:05:27.481 element at 
address: 0x200003a5a2c0 with size: 0.000183 MiB 00:05:27.481 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:05:27.481 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:05:27.481 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:05:27.481 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:05:27.481 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:05:27.481 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:05:27.481 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:05:27.481 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:05:27.481 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:05:27.481 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:05:27.481 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:05:27.481 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:05:27.481 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:05:27.481 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:05:27.481 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:05:27.481 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:05:27.481 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:05:27.481 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:27.481 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:27.481 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:27.481 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:27.481 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:27.481 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:27.481 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:27.481 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:27.481 element at address: 0x20000b27d280 with size: 0.000183 MiB 00:05:27.482 element at address: 0x20000b27d340 with size: 0.000183 MiB 00:05:27.482 element at address: 0x20000b27d400 with size: 0.000183 MiB 00:05:27.482 element at address: 0x20000b27d4c0 with size: 0.000183 MiB 00:05:27.482 element at address: 0x20000b27d580 with size: 0.000183 MiB 00:05:27.482 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:05:27.482 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:05:27.482 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:05:27.482 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:05:27.482 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:05:27.482 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:27.482 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:27.482 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:27.482 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:27.482 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:27.482 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:27.482 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:27.482 element at address: 0x20001aa91900 with size: 0.000183 MiB 00:05:27.482 element at address: 0x20001aa919c0 with size: 0.000183 MiB 00:05:27.482 element at address: 0x20001aa91a80 with size: 0.000183 MiB 00:05:27.482 element at address: 0x20001aa91b40 with size: 0.000183 MiB 00:05:27.482 element at address: 0x20001aa91c00 with size: 0.000183 MiB 00:05:27.482 element at address: 0x20001aa91cc0 with size: 0.000183 MiB 00:05:27.482 element at address: 0x20001aa91d80 
with size: 0.000183 MiB 00:05:27.482 element at address: 0x20001aa91e40 with size: 0.000183 MiB 00:05:27.482 element at address: 0x20001aa91f00 with size: 0.000183 MiB 00:05:27.482 element at address: 0x20001aa91fc0 with size: 0.000183 MiB 00:05:27.482 element at address: 0x20001aa92080 with size: 0.000183 MiB 00:05:27.482 element at address: 0x20001aa92140 with size: 0.000183 MiB 00:05:27.482 element at address: 0x20001aa92200 with size: 0.000183 MiB 00:05:27.482 element at address: 0x20001aa922c0 with size: 0.000183 MiB 00:05:27.482 element at address: 0x20001aa92380 with size: 0.000183 MiB 00:05:27.482 element at address: 0x20001aa92440 with size: 0.000183 MiB 00:05:27.482 element at address: 0x20001aa92500 with size: 0.000183 MiB 00:05:27.482 element at address: 0x20001aa925c0 with size: 0.000183 MiB 00:05:27.482 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:05:27.482 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:05:27.482 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:05:27.482 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:05:27.482 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:05:27.482 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:05:27.482 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:05:27.482 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:05:27.482 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:05:27.482 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:05:27.482 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:05:27.482 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:05:27.482 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:05:27.482 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:05:27.482 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:05:27.482 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:05:27.482 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:05:27.482 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:05:27.482 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:05:27.482 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:05:27.482 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:05:27.482 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:05:27.482 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:05:27.482 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:05:27.482 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:05:27.482 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:05:27.482 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:05:27.482 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:05:27.482 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:05:27.482 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:05:27.482 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:05:27.482 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:05:27.482 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:05:27.482 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:05:27.482 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:05:27.482 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:05:27.482 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:05:27.482 element at address: 0x20001aa94240 with size: 0.000183 MiB 
00:05:27.482 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:05:27.482 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:05:27.482 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:05:27.482 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:05:27.482 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:05:27.482 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:05:27.482 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:05:27.482 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:05:27.482 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:05:27.482 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:05:27.482 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:05:27.482 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:05:27.482 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:05:27.482 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:05:27.482 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:05:27.482 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:05:27.482 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:05:27.482 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:05:27.482 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:05:27.482 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:05:27.482 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:05:27.482 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:05:27.482 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:27.482 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:27.482 element at address: 0x200027e655c0 with size: 0.000183 MiB 00:05:27.482 element at address: 0x200027e65680 with size: 0.000183 MiB 00:05:27.482 element at address: 0x200027e6c280 with size: 0.000183 MiB 00:05:27.482 element at address: 0x200027e6c480 with size: 0.000183 MiB 00:05:27.482 element at address: 0x200027e6c540 with size: 0.000183 MiB 00:05:27.482 element at address: 0x200027e6c600 with size: 0.000183 MiB 00:05:27.482 element at address: 0x200027e6c6c0 with size: 0.000183 MiB 00:05:27.482 element at address: 0x200027e6c780 with size: 0.000183 MiB 00:05:27.482 element at address: 0x200027e6c840 with size: 0.000183 MiB 00:05:27.482 element at address: 0x200027e6c900 with size: 0.000183 MiB 00:05:27.482 element at address: 0x200027e6c9c0 with size: 0.000183 MiB 00:05:27.482 element at address: 0x200027e6ca80 with size: 0.000183 MiB 00:05:27.482 element at address: 0x200027e6cb40 with size: 0.000183 MiB 00:05:27.482 element at address: 0x200027e6cc00 with size: 0.000183 MiB 00:05:27.482 element at address: 0x200027e6ccc0 with size: 0.000183 MiB 00:05:27.482 element at address: 0x200027e6cd80 with size: 0.000183 MiB 00:05:27.482 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:05:27.482 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:05:27.482 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:05:27.482 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:05:27.482 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:05:27.482 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:05:27.482 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:05:27.482 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:05:27.482 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:05:27.482 element at 
address: 0x200027e6d500 with size: 0.000183 MiB 00:05:27.482 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:05:27.482 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:05:27.482 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:05:27.482 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:05:27.482 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:05:27.482 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:05:27.482 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:05:27.482 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:05:27.482 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:05:27.482 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:05:27.482 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:05:27.482 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:05:27.482 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:05:27.482 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:05:27.482 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:05:27.482 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:05:27.482 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:05:27.482 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:05:27.482 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:05:27.482 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:05:27.482 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:05:27.482 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:05:27.482 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:05:27.482 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:05:27.482 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:05:27.482 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:05:27.482 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:05:27.482 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:05:27.482 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:05:27.482 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:05:27.482 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:05:27.482 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:05:27.482 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:05:27.482 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:05:27.482 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:05:27.482 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:05:27.482 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:05:27.482 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:05:27.483 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:05:27.483 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:05:27.483 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:05:27.483 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:05:27.483 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:05:27.483 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:05:27.483 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:05:27.483 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:05:27.483 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:05:27.483 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:05:27.483 element at address: 0x200027e6f9c0 
with size: 0.000183 MiB 00:05:27.483 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:05:27.483 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:05:27.483 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:05:27.483 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:05:27.483 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:05:27.483 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:27.483 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:27.483 list of memzone associated elements. size: 602.262573 MiB 00:05:27.483 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:27.483 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:27.483 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:27.483 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:27.483 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:27.483 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_59725_0 00:05:27.483 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:27.483 associated memzone info: size: 48.002930 MiB name: MP_evtpool_59725_0 00:05:27.483 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:27.483 associated memzone info: size: 48.002930 MiB name: MP_msgpool_59725_0 00:05:27.483 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:27.483 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:27.483 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:27.483 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:27.483 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:27.483 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_59725 00:05:27.483 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:27.483 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_59725 00:05:27.483 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:27.483 associated memzone info: size: 1.007996 MiB name: MP_evtpool_59725 00:05:27.483 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:27.483 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:27.483 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:27.483 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:27.483 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:27.483 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:27.483 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:27.483 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:27.483 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:27.483 associated memzone info: size: 1.000366 MiB name: RG_ring_0_59725 00:05:27.483 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:27.483 associated memzone info: size: 1.000366 MiB name: RG_ring_1_59725 00:05:27.483 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:27.483 associated memzone info: size: 1.000366 MiB name: RG_ring_4_59725 00:05:27.483 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:27.483 associated memzone info: size: 1.000366 MiB name: RG_ring_5_59725 00:05:27.483 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:27.483 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_59725 
00:05:27.483 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:27.483 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:27.483 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:27.483 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:27.483 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:27.483 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:27.483 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:27.483 associated memzone info: size: 0.125366 MiB name: RG_ring_2_59725 00:05:27.483 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:27.483 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:27.483 element at address: 0x200027e65740 with size: 0.023743 MiB 00:05:27.483 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:27.483 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:27.483 associated memzone info: size: 0.015991 MiB name: RG_ring_3_59725 00:05:27.483 element at address: 0x200027e6b880 with size: 0.002441 MiB 00:05:27.483 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:27.483 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:05:27.483 associated memzone info: size: 0.000183 MiB name: MP_msgpool_59725 00:05:27.483 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:27.483 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_59725 00:05:27.483 element at address: 0x200027e6c340 with size: 0.000305 MiB 00:05:27.483 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:27.483 16:54:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:27.483 16:54:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 59725 00:05:27.483 16:54:17 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 59725 ']' 00:05:27.483 16:54:17 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 59725 00:05:27.483 16:54:17 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:05:27.483 16:54:17 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:27.483 16:54:17 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59725 00:05:27.483 16:54:17 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:27.483 16:54:17 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:27.483 killing process with pid 59725 00:05:27.483 16:54:17 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59725' 00:05:27.483 16:54:17 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 59725 00:05:27.483 16:54:17 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 59725 00:05:27.742 00:05:27.742 real 0m1.572s 00:05:27.742 user 0m1.617s 00:05:27.742 sys 0m0.423s 00:05:27.742 16:54:17 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:27.742 16:54:17 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:27.742 ************************************ 00:05:27.742 END TEST dpdk_mem_utility 00:05:27.742 ************************************ 00:05:27.742 16:54:17 -- common/autotest_common.sh@1142 -- # return 0 00:05:27.742 16:54:17 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:27.742 16:54:17 -- 
common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:27.742 16:54:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:27.742 16:54:17 -- common/autotest_common.sh@10 -- # set +x 00:05:27.742 ************************************ 00:05:27.742 START TEST event 00:05:27.742 ************************************ 00:05:27.742 16:54:17 event -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:28.003 * Looking for test storage... 00:05:28.003 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:28.003 16:54:18 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:28.003 16:54:18 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:28.003 16:54:18 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:28.003 16:54:18 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:28.003 16:54:18 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:28.003 16:54:18 event -- common/autotest_common.sh@10 -- # set +x 00:05:28.003 ************************************ 00:05:28.003 START TEST event_perf 00:05:28.003 ************************************ 00:05:28.003 16:54:18 event.event_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:28.003 Running I/O for 1 seconds...[2024-07-15 16:54:18.085664] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:05:28.003 [2024-07-15 16:54:18.085750] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59802 ] 00:05:28.003 [2024-07-15 16:54:18.224322] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:28.262 [2024-07-15 16:54:18.332312] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:28.262 [2024-07-15 16:54:18.332472] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:28.262 [2024-07-15 16:54:18.332538] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.262 Running I/O for 1 seconds...[2024-07-15 16:54:18.332538] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:29.197 00:05:29.197 lcore 0: 206836 00:05:29.197 lcore 1: 206837 00:05:29.197 lcore 2: 206837 00:05:29.197 lcore 3: 206836 00:05:29.197 done. 
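The per-lcore counters and the "done." marker above are the whole output of the event_perf step. A minimal sketch of that invocation, using only the binary path, core mask, and duration visible in the trace (everything else is left at defaults), would be:

    # Sketch of the traced event_perf step: 4 reactor cores (mask 0xF), 1-second run.
    # The "lcore N:" numbers printed at the end are the events processed on each core.
    /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
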
00:05:29.197 00:05:29.197 real 0m1.347s 00:05:29.197 user 0m4.176s 00:05:29.197 sys 0m0.053s 00:05:29.197 16:54:19 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:29.197 16:54:19 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:29.197 ************************************ 00:05:29.197 END TEST event_perf 00:05:29.197 ************************************ 00:05:29.197 16:54:19 event -- common/autotest_common.sh@1142 -- # return 0 00:05:29.197 16:54:19 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:29.197 16:54:19 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:29.197 16:54:19 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:29.197 16:54:19 event -- common/autotest_common.sh@10 -- # set +x 00:05:29.197 ************************************ 00:05:29.197 START TEST event_reactor 00:05:29.197 ************************************ 00:05:29.197 16:54:19 event.event_reactor -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:29.197 [2024-07-15 16:54:19.484149] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:05:29.197 [2024-07-15 16:54:19.484225] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59835 ] 00:05:29.456 [2024-07-15 16:54:19.618130] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.456 [2024-07-15 16:54:19.712537] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.832 test_start 00:05:30.832 oneshot 00:05:30.832 tick 100 00:05:30.832 tick 100 00:05:30.832 tick 250 00:05:30.832 tick 100 00:05:30.833 tick 100 00:05:30.833 tick 100 00:05:30.833 tick 250 00:05:30.833 tick 500 00:05:30.833 tick 100 00:05:30.833 tick 100 00:05:30.833 tick 250 00:05:30.833 tick 100 00:05:30.833 tick 100 00:05:30.833 test_end 00:05:30.833 00:05:30.833 real 0m1.327s 00:05:30.833 user 0m1.168s 00:05:30.833 sys 0m0.053s 00:05:30.833 16:54:20 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:30.833 16:54:20 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:30.833 ************************************ 00:05:30.833 END TEST event_reactor 00:05:30.833 ************************************ 00:05:30.833 16:54:20 event -- common/autotest_common.sh@1142 -- # return 0 00:05:30.833 16:54:20 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:30.833 16:54:20 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:30.833 16:54:20 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.833 16:54:20 event -- common/autotest_common.sh@10 -- # set +x 00:05:30.833 ************************************ 00:05:30.833 START TEST event_reactor_perf 00:05:30.833 ************************************ 00:05:30.833 16:54:20 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:30.833 [2024-07-15 16:54:20.865269] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:05:30.833 [2024-07-15 16:54:20.865398] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59871 ] 00:05:30.833 [2024-07-15 16:54:20.998176] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.833 [2024-07-15 16:54:21.103637] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.209 test_start 00:05:32.209 test_end 00:05:32.209 Performance: 415203 events per second 00:05:32.209 00:05:32.209 real 0m1.331s 00:05:32.209 user 0m1.173s 00:05:32.209 sys 0m0.052s 00:05:32.209 16:54:22 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:32.209 ************************************ 00:05:32.209 END TEST event_reactor_perf 00:05:32.209 16:54:22 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:32.209 ************************************ 00:05:32.209 16:54:22 event -- common/autotest_common.sh@1142 -- # return 0 00:05:32.209 16:54:22 event -- event/event.sh@49 -- # uname -s 00:05:32.209 16:54:22 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:32.209 16:54:22 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:32.209 16:54:22 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:32.209 16:54:22 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:32.209 16:54:22 event -- common/autotest_common.sh@10 -- # set +x 00:05:32.209 ************************************ 00:05:32.209 START TEST event_scheduler 00:05:32.209 ************************************ 00:05:32.209 16:54:22 event.event_scheduler -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:32.209 * Looking for test storage... 00:05:32.209 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:32.209 16:54:22 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:32.209 16:54:22 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=59932 00:05:32.209 16:54:22 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:32.209 16:54:22 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 59932 00:05:32.209 16:54:22 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:32.209 16:54:22 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 59932 ']' 00:05:32.209 16:54:22 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:32.209 16:54:22 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:32.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:32.209 16:54:22 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:32.209 16:54:22 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:32.209 16:54:22 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:32.209 [2024-07-15 16:54:22.364288] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:05:32.209 [2024-07-15 16:54:22.364421] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59932 ] 00:05:32.209 [2024-07-15 16:54:22.504242] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:32.467 [2024-07-15 16:54:22.612760] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.467 [2024-07-15 16:54:22.612946] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:32.467 [2024-07-15 16:54:22.613079] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:32.467 [2024-07-15 16:54:22.613081] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:33.401 16:54:23 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:33.401 16:54:23 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:05:33.401 16:54:23 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:33.401 16:54:23 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:33.401 16:54:23 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:33.401 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:33.401 POWER: Cannot set governor of lcore 0 to userspace 00:05:33.401 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:33.401 POWER: Cannot set governor of lcore 0 to performance 00:05:33.401 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:33.401 POWER: Cannot set governor of lcore 0 to userspace 00:05:33.401 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:33.401 POWER: Cannot set governor of lcore 0 to userspace 00:05:33.401 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:33.401 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:33.401 POWER: Unable to set Power Management Environment for lcore 0 00:05:33.401 [2024-07-15 16:54:23.358855] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:05:33.401 [2024-07-15 16:54:23.358868] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:05:33.401 [2024-07-15 16:54:23.358877] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:05:33.401 [2024-07-15 16:54:23.358889] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:33.401 [2024-07-15 16:54:23.358896] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:33.401 [2024-07-15 16:54:23.358903] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:33.401 16:54:23 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:33.401 16:54:23 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:33.401 16:54:23 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:33.401 16:54:23 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:33.401 [2024-07-15 16:54:23.421556] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:33.401 [2024-07-15 16:54:23.455215] 
scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:33.401 16:54:23 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:33.401 16:54:23 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:33.402 16:54:23 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:33.402 16:54:23 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:33.402 16:54:23 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:33.402 ************************************ 00:05:33.402 START TEST scheduler_create_thread 00:05:33.402 ************************************ 00:05:33.402 16:54:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:05:33.402 16:54:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:33.402 16:54:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:33.402 16:54:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.402 2 00:05:33.402 16:54:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:33.402 16:54:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:33.402 16:54:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:33.402 16:54:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.402 3 00:05:33.402 16:54:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:33.402 16:54:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:33.402 16:54:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:33.402 16:54:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.402 4 00:05:33.402 16:54:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:33.402 16:54:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:33.402 16:54:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:33.402 16:54:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.402 5 00:05:33.402 16:54:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:33.402 16:54:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:33.402 16:54:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:33.402 16:54:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.402 6 00:05:33.402 
16:54:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:33.402 16:54:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:33.402 16:54:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:33.402 16:54:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.402 7 00:05:33.402 16:54:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:33.402 16:54:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:33.402 16:54:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:33.402 16:54:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.402 8 00:05:33.402 16:54:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:33.402 16:54:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:33.402 16:54:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:33.402 16:54:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.402 9 00:05:33.402 16:54:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:33.402 16:54:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:33.402 16:54:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:33.402 16:54:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.402 10 00:05:33.402 16:54:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:33.402 16:54:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:33.402 16:54:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:33.402 16:54:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.402 16:54:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:33.402 16:54:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:33.402 16:54:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:33.402 16:54:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:33.402 16:54:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.402 16:54:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:33.402 16:54:23 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:33.402 16:54:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:33.402 16:54:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:34.828 16:54:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:34.829 16:54:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:34.829 16:54:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:34.829 16:54:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:34.829 16:54:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:36.205 ************************************ 00:05:36.205 END TEST scheduler_create_thread 00:05:36.205 ************************************ 00:05:36.205 16:54:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:36.205 00:05:36.205 real 0m2.612s 00:05:36.205 user 0m0.018s 00:05:36.205 sys 0m0.006s 00:05:36.205 16:54:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:36.205 16:54:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:36.205 16:54:26 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:05:36.205 16:54:26 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:36.205 16:54:26 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 59932 00:05:36.205 16:54:26 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 59932 ']' 00:05:36.205 16:54:26 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 59932 00:05:36.205 16:54:26 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:05:36.205 16:54:26 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:36.205 16:54:26 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59932 00:05:36.205 killing process with pid 59932 00:05:36.205 16:54:26 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:36.205 16:54:26 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:36.205 16:54:26 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59932' 00:05:36.205 16:54:26 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 59932 00:05:36.205 16:54:26 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 59932 00:05:36.465 [2024-07-15 16:54:26.559352] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
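The scheduler_create_thread test above drives everything through RPC with the scheduler test plugin. A condensed, hedged sketch of the traced sequence follows; in the test these calls go through the rpc_cmd helper, and invoking scripts/rpc.py directly with --plugin as shown assumes the scheduler_plugin module (test/event/scheduler) is importable via PYTHONPATH. The thread IDs 11 and 50/12 are the ones reported in the trace, not fixed values.

    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py "$@"; }

    rpc framework_set_scheduler dynamic      # switch to the dynamic scheduler
    rpc framework_start_init                 # finish framework initialization
    # Create a pinned thread with a core mask (-m) and active percentage (-a),
    # then adjust and delete threads by the IDs returned above
    rpc --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    rpc --plugin scheduler_plugin scheduler_thread_set_active 11 50
    rpc --plugin scheduler_plugin scheduler_thread_delete 12
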
00:05:36.724 ************************************ 00:05:36.724 END TEST event_scheduler 00:05:36.724 ************************************ 00:05:36.724 00:05:36.724 real 0m4.555s 00:05:36.724 user 0m8.666s 00:05:36.724 sys 0m0.351s 00:05:36.724 16:54:26 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:36.724 16:54:26 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:36.724 16:54:26 event -- common/autotest_common.sh@1142 -- # return 0 00:05:36.724 16:54:26 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:36.724 16:54:26 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:36.724 16:54:26 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:36.724 16:54:26 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:36.724 16:54:26 event -- common/autotest_common.sh@10 -- # set +x 00:05:36.724 ************************************ 00:05:36.724 START TEST app_repeat 00:05:36.724 ************************************ 00:05:36.724 16:54:26 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:05:36.724 16:54:26 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.724 16:54:26 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.724 16:54:26 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:36.724 16:54:26 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:36.724 16:54:26 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:36.724 16:54:26 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:36.724 16:54:26 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:36.724 16:54:26 event.app_repeat -- event/event.sh@19 -- # repeat_pid=60032 00:05:36.724 Process app_repeat pid: 60032 00:05:36.724 16:54:26 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:36.724 16:54:26 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:36.724 16:54:26 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 60032' 00:05:36.724 spdk_app_start Round 0 00:05:36.724 16:54:26 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:36.724 16:54:26 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:36.724 16:54:26 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60032 /var/tmp/spdk-nbd.sock 00:05:36.724 16:54:26 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 60032 ']' 00:05:36.724 16:54:26 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:36.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:36.724 16:54:26 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:36.724 16:54:26 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:36.724 16:54:26 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:36.724 16:54:26 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:36.725 [2024-07-15 16:54:26.865599] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:05:36.725 [2024-07-15 16:54:26.865699] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60032 ] 00:05:36.725 [2024-07-15 16:54:26.995182] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:36.983 [2024-07-15 16:54:27.104729] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:36.984 [2024-07-15 16:54:27.104738] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.984 [2024-07-15 16:54:27.159239] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:37.920 16:54:27 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:37.920 16:54:27 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:37.920 16:54:27 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:37.920 Malloc0 00:05:37.920 16:54:28 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:38.179 Malloc1 00:05:38.179 16:54:28 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:38.179 16:54:28 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.179 16:54:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:38.179 16:54:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:38.179 16:54:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.180 16:54:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:38.180 16:54:28 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:38.180 16:54:28 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.180 16:54:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:38.180 16:54:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:38.180 16:54:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.180 16:54:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:38.180 16:54:28 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:38.180 16:54:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:38.180 16:54:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:38.180 16:54:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:38.439 /dev/nbd0 00:05:38.439 16:54:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:38.439 16:54:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:38.439 16:54:28 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:38.439 16:54:28 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:38.439 16:54:28 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:38.439 16:54:28 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:38.439 16:54:28 
event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:38.439 16:54:28 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:38.439 16:54:28 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:38.439 16:54:28 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:38.439 16:54:28 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:38.439 1+0 records in 00:05:38.439 1+0 records out 00:05:38.439 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000275889 s, 14.8 MB/s 00:05:38.439 16:54:28 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:38.439 16:54:28 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:38.439 16:54:28 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:38.439 16:54:28 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:38.439 16:54:28 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:38.439 16:54:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:38.439 16:54:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:38.439 16:54:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:38.698 /dev/nbd1 00:05:38.698 16:54:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:38.698 16:54:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:38.698 16:54:28 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:38.698 16:54:28 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:38.698 16:54:28 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:38.698 16:54:28 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:38.698 16:54:28 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:38.698 16:54:28 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:38.698 16:54:28 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:38.698 16:54:28 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:38.698 16:54:28 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:38.698 1+0 records in 00:05:38.698 1+0 records out 00:05:38.698 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000243083 s, 16.9 MB/s 00:05:38.698 16:54:28 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:38.698 16:54:28 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:38.698 16:54:28 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:38.698 16:54:28 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:38.698 16:54:28 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:38.698 16:54:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:38.698 16:54:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:38.698 16:54:28 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count 
/var/tmp/spdk-nbd.sock 00:05:38.698 16:54:28 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.698 16:54:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:38.956 16:54:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:38.956 { 00:05:38.956 "nbd_device": "/dev/nbd0", 00:05:38.956 "bdev_name": "Malloc0" 00:05:38.956 }, 00:05:38.956 { 00:05:38.956 "nbd_device": "/dev/nbd1", 00:05:38.956 "bdev_name": "Malloc1" 00:05:38.956 } 00:05:38.956 ]' 00:05:38.956 16:54:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:38.956 { 00:05:38.956 "nbd_device": "/dev/nbd0", 00:05:38.956 "bdev_name": "Malloc0" 00:05:38.956 }, 00:05:38.956 { 00:05:38.956 "nbd_device": "/dev/nbd1", 00:05:38.956 "bdev_name": "Malloc1" 00:05:38.956 } 00:05:38.956 ]' 00:05:38.956 16:54:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:38.956 16:54:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:38.956 /dev/nbd1' 00:05:38.956 16:54:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:38.956 16:54:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:38.956 /dev/nbd1' 00:05:39.215 16:54:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:39.215 16:54:29 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:39.215 16:54:29 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:39.215 16:54:29 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:39.215 16:54:29 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:39.215 16:54:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:39.215 16:54:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:39.215 16:54:29 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:39.215 16:54:29 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:39.215 16:54:29 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:39.215 16:54:29 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:39.215 256+0 records in 00:05:39.215 256+0 records out 00:05:39.215 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0107888 s, 97.2 MB/s 00:05:39.215 16:54:29 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:39.215 16:54:29 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:39.215 256+0 records in 00:05:39.215 256+0 records out 00:05:39.215 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0254105 s, 41.3 MB/s 00:05:39.215 16:54:29 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:39.215 16:54:29 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:39.215 256+0 records in 00:05:39.215 256+0 records out 00:05:39.215 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0227358 s, 46.1 MB/s 00:05:39.215 16:54:29 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:39.215 16:54:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:39.215 16:54:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:39.215 16:54:29 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:39.215 16:54:29 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:39.215 16:54:29 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:39.215 16:54:29 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:39.215 16:54:29 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:39.215 16:54:29 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:39.215 16:54:29 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:39.215 16:54:29 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:39.215 16:54:29 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:39.215 16:54:29 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:39.215 16:54:29 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:39.215 16:54:29 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:39.215 16:54:29 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:39.215 16:54:29 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:39.215 16:54:29 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:39.215 16:54:29 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:39.473 16:54:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:39.473 16:54:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:39.473 16:54:29 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:39.473 16:54:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:39.473 16:54:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:39.473 16:54:29 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:39.473 16:54:29 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:39.473 16:54:29 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:39.473 16:54:29 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:39.473 16:54:29 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:39.731 16:54:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:39.731 16:54:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:39.731 16:54:29 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:39.731 16:54:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:39.731 16:54:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:39.731 16:54:29 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:39.731 16:54:29 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:39.731 16:54:29 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:39.731 16:54:29 
event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:39.731 16:54:29 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:39.731 16:54:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:39.989 16:54:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:39.989 16:54:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:39.989 16:54:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:39.989 16:54:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:39.989 16:54:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:39.989 16:54:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:39.989 16:54:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:39.989 16:54:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:39.989 16:54:30 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:39.989 16:54:30 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:39.989 16:54:30 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:39.989 16:54:30 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:39.989 16:54:30 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:40.247 16:54:30 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:40.506 [2024-07-15 16:54:30.639137] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:40.506 [2024-07-15 16:54:30.718450] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:40.506 [2024-07-15 16:54:30.718477] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.506 [2024-07-15 16:54:30.772751] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:40.506 [2024-07-15 16:54:30.772847] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:40.506 [2024-07-15 16:54:30.772860] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:43.790 spdk_app_start Round 1 00:05:43.790 16:54:33 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:43.790 16:54:33 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:43.790 16:54:33 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60032 /var/tmp/spdk-nbd.sock 00:05:43.790 16:54:33 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 60032 ']' 00:05:43.790 16:54:33 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:43.790 16:54:33 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:43.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:43.790 16:54:33 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
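Before Round 1 starts, Round 0 of app_repeat above has already exercised the full nbd_rpc_data_verify flow. A compressed sketch of what those traced commands do, using the socket path, bdev_malloc_create arguments (64 MB bdevs, 4 KiB block size per the trace), and dd/cmp parameters shown in the log:

    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock "$@"; }

    # Create two malloc bdevs and export them as nbd devices
    rpc bdev_malloc_create 64 4096            # -> Malloc0
    rpc bdev_malloc_create 64 4096            # -> Malloc1
    rpc nbd_start_disk Malloc0 /dev/nbd0
    rpc nbd_start_disk Malloc1 /dev/nbd1

    # Write 1 MiB of random data through each nbd device and compare it back
    dd if=/dev/urandom of=nbdrandtest bs=4096 count=256
    for dev in /dev/nbd0 /dev/nbd1; do
        dd if=nbdrandtest of="$dev" bs=4096 count=256 oflag=direct
        cmp -b -n 1M nbdrandtest "$dev"
    done
    rm nbdrandtest

    rpc nbd_stop_disk /dev/nbd0
    rpc nbd_stop_disk /dev/nbd1
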
00:05:43.790 16:54:33 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:43.790 16:54:33 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:43.790 16:54:33 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:43.790 16:54:33 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:43.790 16:54:33 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:43.790 Malloc0 00:05:43.790 16:54:33 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:44.048 Malloc1 00:05:44.048 16:54:34 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:44.048 16:54:34 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:44.048 16:54:34 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:44.048 16:54:34 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:44.048 16:54:34 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:44.048 16:54:34 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:44.048 16:54:34 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:44.048 16:54:34 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:44.048 16:54:34 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:44.048 16:54:34 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:44.048 16:54:34 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:44.048 16:54:34 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:44.048 16:54:34 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:44.048 16:54:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:44.048 16:54:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:44.048 16:54:34 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:44.306 /dev/nbd0 00:05:44.306 16:54:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:44.306 16:54:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:44.306 16:54:34 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:44.306 16:54:34 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:44.306 16:54:34 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:44.306 16:54:34 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:44.306 16:54:34 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:44.306 16:54:34 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:44.306 16:54:34 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:44.306 16:54:34 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:44.306 16:54:34 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:44.306 1+0 records in 00:05:44.306 1+0 records out 
00:05:44.306 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000281412 s, 14.6 MB/s 00:05:44.306 16:54:34 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:44.306 16:54:34 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:44.306 16:54:34 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:44.306 16:54:34 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:44.306 16:54:34 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:44.306 16:54:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:44.306 16:54:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:44.306 16:54:34 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:44.563 /dev/nbd1 00:05:44.563 16:54:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:44.563 16:54:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:44.563 16:54:34 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:44.563 16:54:34 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:44.563 16:54:34 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:44.563 16:54:34 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:44.563 16:54:34 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:44.563 16:54:34 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:44.563 16:54:34 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:44.563 16:54:34 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:44.563 16:54:34 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:44.563 1+0 records in 00:05:44.563 1+0 records out 00:05:44.563 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000450654 s, 9.1 MB/s 00:05:44.563 16:54:34 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:44.563 16:54:34 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:44.563 16:54:34 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:44.563 16:54:34 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:44.563 16:54:34 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:44.563 16:54:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:44.563 16:54:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:44.563 16:54:34 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:44.563 16:54:34 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:44.563 16:54:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:44.821 16:54:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:44.821 { 00:05:44.821 "nbd_device": "/dev/nbd0", 00:05:44.821 "bdev_name": "Malloc0" 00:05:44.821 }, 00:05:44.821 { 00:05:44.821 "nbd_device": "/dev/nbd1", 00:05:44.821 "bdev_name": "Malloc1" 00:05:44.822 } 
00:05:44.822 ]' 00:05:44.822 16:54:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:44.822 { 00:05:44.822 "nbd_device": "/dev/nbd0", 00:05:44.822 "bdev_name": "Malloc0" 00:05:44.822 }, 00:05:44.822 { 00:05:44.822 "nbd_device": "/dev/nbd1", 00:05:44.822 "bdev_name": "Malloc1" 00:05:44.822 } 00:05:44.822 ]' 00:05:44.822 16:54:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:44.822 16:54:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:44.822 /dev/nbd1' 00:05:44.822 16:54:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:44.822 /dev/nbd1' 00:05:44.822 16:54:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:44.822 16:54:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:44.822 16:54:35 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:44.822 16:54:35 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:44.822 16:54:35 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:44.822 16:54:35 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:44.822 16:54:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:44.822 16:54:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:44.822 16:54:35 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:44.822 16:54:35 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:44.822 16:54:35 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:44.822 16:54:35 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:44.822 256+0 records in 00:05:44.822 256+0 records out 00:05:44.822 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0104768 s, 100 MB/s 00:05:44.822 16:54:35 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:44.822 16:54:35 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:44.822 256+0 records in 00:05:44.822 256+0 records out 00:05:44.822 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0231484 s, 45.3 MB/s 00:05:44.822 16:54:35 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:44.822 16:54:35 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:44.822 256+0 records in 00:05:44.822 256+0 records out 00:05:44.822 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0233831 s, 44.8 MB/s 00:05:44.822 16:54:35 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:45.081 16:54:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:45.081 16:54:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:45.081 16:54:35 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:45.081 16:54:35 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:45.081 16:54:35 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:45.081 16:54:35 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:45.081 16:54:35 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:05:45.081 16:54:35 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:45.081 16:54:35 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:45.081 16:54:35 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:45.081 16:54:35 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:45.081 16:54:35 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:45.081 16:54:35 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.081 16:54:35 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:45.081 16:54:35 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:45.081 16:54:35 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:45.081 16:54:35 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:45.081 16:54:35 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:45.339 16:54:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:45.339 16:54:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:45.339 16:54:35 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:45.339 16:54:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:45.339 16:54:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:45.339 16:54:35 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:45.339 16:54:35 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:45.339 16:54:35 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:45.339 16:54:35 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:45.339 16:54:35 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:45.597 16:54:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:45.597 16:54:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:45.597 16:54:35 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:45.597 16:54:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:45.597 16:54:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:45.597 16:54:35 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:45.597 16:54:35 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:45.597 16:54:35 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:45.597 16:54:35 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:45.597 16:54:35 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.597 16:54:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:45.855 16:54:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:45.855 16:54:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:45.855 16:54:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:05:45.855 16:54:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:45.855 16:54:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:45.855 16:54:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:45.855 16:54:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:45.855 16:54:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:45.855 16:54:36 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:45.855 16:54:36 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:45.855 16:54:36 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:45.855 16:54:36 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:45.855 16:54:36 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:46.112 16:54:36 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:46.370 [2024-07-15 16:54:36.527791] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:46.370 [2024-07-15 16:54:36.633716] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:46.370 [2024-07-15 16:54:36.633727] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.628 [2024-07-15 16:54:36.686855] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:46.628 [2024-07-15 16:54:36.686936] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:46.628 [2024-07-15 16:54:36.686951] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:49.159 16:54:39 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:49.159 spdk_app_start Round 2 00:05:49.159 16:54:39 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:49.159 16:54:39 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60032 /var/tmp/spdk-nbd.sock 00:05:49.159 16:54:39 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 60032 ']' 00:05:49.159 16:54:39 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:49.159 16:54:39 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:49.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:49.159 16:54:39 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
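The write/verify pass traced above reduces to a small dd/cmp round-trip: fill a scratch file with 1 MiB of random data, copy it onto each exported NBD device with O_DIRECT, then byte-compare the devices against the file. A minimal stand-alone sketch of that pattern, with sizes and cmp options taken from the trace (the scratch path below is a placeholder for test/event/nbdrandtest):

    # illustrative sketch of nbd_dd_data_verify's write + verify steps
    tmp_file=/tmp/nbdrandtest                 # placeholder; the test uses test/event/nbdrandtest
    nbd_list=(/dev/nbd0 /dev/nbd1)

    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256              # 256 x 4 KiB = 1 MiB of random data
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct   # push it through each NBD device, bypassing the page cache
    done
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$dev"                              # non-zero exit means the data did not round-trip
    done
    rm "$tmp_file"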
00:05:49.159 16:54:39 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:49.159 16:54:39 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:49.417 16:54:39 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:49.417 16:54:39 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:49.417 16:54:39 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:49.700 Malloc0 00:05:49.700 16:54:39 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:49.959 Malloc1 00:05:49.959 16:54:40 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:49.959 16:54:40 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.959 16:54:40 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:49.959 16:54:40 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:49.959 16:54:40 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:49.959 16:54:40 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:49.959 16:54:40 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:49.959 16:54:40 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.959 16:54:40 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:49.959 16:54:40 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:49.959 16:54:40 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:49.959 16:54:40 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:49.959 16:54:40 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:49.959 16:54:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:49.959 16:54:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:49.959 16:54:40 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:50.215 /dev/nbd0 00:05:50.215 16:54:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:50.472 16:54:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:50.472 16:54:40 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:50.472 16:54:40 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:50.472 16:54:40 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:50.472 16:54:40 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:50.472 16:54:40 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:50.472 16:54:40 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:50.472 16:54:40 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:50.472 16:54:40 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:50.472 16:54:40 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:50.472 1+0 records in 00:05:50.472 1+0 records out 
00:05:50.472 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000275252 s, 14.9 MB/s 00:05:50.472 16:54:40 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:50.472 16:54:40 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:50.472 16:54:40 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:50.472 16:54:40 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:50.472 16:54:40 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:50.472 16:54:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:50.472 16:54:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:50.472 16:54:40 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:50.731 /dev/nbd1 00:05:50.731 16:54:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:50.731 16:54:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:50.731 16:54:40 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:50.731 16:54:40 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:50.731 16:54:40 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:50.731 16:54:40 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:50.731 16:54:40 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:50.731 16:54:40 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:50.731 16:54:40 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:50.731 16:54:40 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:50.731 16:54:40 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:50.731 1+0 records in 00:05:50.731 1+0 records out 00:05:50.731 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000258826 s, 15.8 MB/s 00:05:50.731 16:54:40 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:50.731 16:54:40 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:50.731 16:54:40 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:50.731 16:54:40 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:50.731 16:54:40 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:50.731 16:54:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:50.731 16:54:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:50.731 16:54:40 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:50.731 16:54:40 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.731 16:54:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:51.020 16:54:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:51.020 { 00:05:51.020 "nbd_device": "/dev/nbd0", 00:05:51.020 "bdev_name": "Malloc0" 00:05:51.020 }, 00:05:51.020 { 00:05:51.020 "nbd_device": "/dev/nbd1", 00:05:51.020 "bdev_name": "Malloc1" 00:05:51.020 } 
00:05:51.020 ]' 00:05:51.020 16:54:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:51.020 16:54:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:51.020 { 00:05:51.020 "nbd_device": "/dev/nbd0", 00:05:51.020 "bdev_name": "Malloc0" 00:05:51.020 }, 00:05:51.020 { 00:05:51.020 "nbd_device": "/dev/nbd1", 00:05:51.020 "bdev_name": "Malloc1" 00:05:51.020 } 00:05:51.020 ]' 00:05:51.020 16:54:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:51.020 /dev/nbd1' 00:05:51.020 16:54:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:51.020 /dev/nbd1' 00:05:51.020 16:54:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:51.020 16:54:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:51.020 16:54:41 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:51.020 16:54:41 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:51.020 16:54:41 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:51.020 16:54:41 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:51.020 16:54:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.020 16:54:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:51.020 16:54:41 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:51.020 16:54:41 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:51.020 16:54:41 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:51.020 16:54:41 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:51.020 256+0 records in 00:05:51.020 256+0 records out 00:05:51.020 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00753832 s, 139 MB/s 00:05:51.020 16:54:41 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:51.020 16:54:41 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:51.020 256+0 records in 00:05:51.020 256+0 records out 00:05:51.020 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0212782 s, 49.3 MB/s 00:05:51.020 16:54:41 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:51.020 16:54:41 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:51.020 256+0 records in 00:05:51.020 256+0 records out 00:05:51.020 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0237399 s, 44.2 MB/s 00:05:51.020 16:54:41 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:51.020 16:54:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.020 16:54:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:51.020 16:54:41 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:51.020 16:54:41 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:51.020 16:54:41 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:51.020 16:54:41 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:51.020 16:54:41 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:51.020 16:54:41 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:51.020 16:54:41 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:51.020 16:54:41 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:51.020 16:54:41 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:51.020 16:54:41 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:51.020 16:54:41 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.020 16:54:41 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.020 16:54:41 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:51.020 16:54:41 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:51.020 16:54:41 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:51.020 16:54:41 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:51.280 16:54:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:51.280 16:54:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:51.280 16:54:41 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:51.280 16:54:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:51.280 16:54:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:51.280 16:54:41 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:51.280 16:54:41 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:51.280 16:54:41 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:51.280 16:54:41 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:51.280 16:54:41 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:51.538 16:54:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:51.538 16:54:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:51.538 16:54:41 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:51.538 16:54:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:51.538 16:54:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:51.538 16:54:41 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:51.538 16:54:41 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:51.538 16:54:41 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:51.538 16:54:41 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:51.538 16:54:41 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.538 16:54:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:51.796 16:54:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:51.796 16:54:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:51.796 16:54:42 event.app_repeat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:05:51.796 16:54:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:51.796 16:54:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:51.796 16:54:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:51.797 16:54:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:51.797 16:54:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:51.797 16:54:42 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:51.797 16:54:42 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:51.797 16:54:42 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:51.797 16:54:42 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:51.797 16:54:42 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:52.363 16:54:42 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:52.363 [2024-07-15 16:54:42.585267] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:52.621 [2024-07-15 16:54:42.685690] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:52.621 [2024-07-15 16:54:42.685702] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.621 [2024-07-15 16:54:42.738798] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:52.621 [2024-07-15 16:54:42.738886] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:52.621 [2024-07-15 16:54:42.738901] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:55.146 16:54:45 event.app_repeat -- event/event.sh@38 -- # waitforlisten 60032 /var/tmp/spdk-nbd.sock 00:05:55.146 16:54:45 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 60032 ']' 00:05:55.146 16:54:45 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:55.146 16:54:45 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:55.146 16:54:45 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:55.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
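Each round's bring-up, as traced in the lines above, is driven entirely over the app's RPC socket: create a malloc bdev, export it as an NBD device, then poll /proc/partitions until the kernel shows it. A rough sketch under those assumptions (the retry delay and the /dev/null read target are simplifications of the waitfornbd helper):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock

    bdev=$($rpc -s "$sock" bdev_malloc_create 64 4096)      # 64 MB malloc bdev, 4096-byte blocks (prints e.g. Malloc0)
    $rpc -s "$sock" nbd_start_disk "$bdev" /dev/nbd0

    for i in {1..20}; do                                    # wait for the kernel to expose the device
        grep -q -w nbd0 /proc/partitions && break
        sleep 0.1                                           # assumed delay; the helper retries up to 20 times
    done
    dd if=/dev/nbd0 of=/dev/null bs=4096 count=1 iflag=direct   # one direct read to confirm the device answers I/O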
00:05:55.147 16:54:45 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:55.147 16:54:45 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:55.403 16:54:45 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:55.403 16:54:45 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:55.403 16:54:45 event.app_repeat -- event/event.sh@39 -- # killprocess 60032 00:05:55.403 16:54:45 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 60032 ']' 00:05:55.403 16:54:45 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 60032 00:05:55.403 16:54:45 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:05:55.403 16:54:45 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:55.403 16:54:45 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60032 00:05:55.403 16:54:45 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:55.403 killing process with pid 60032 00:05:55.403 16:54:45 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:55.403 16:54:45 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60032' 00:05:55.403 16:54:45 event.app_repeat -- common/autotest_common.sh@967 -- # kill 60032 00:05:55.403 16:54:45 event.app_repeat -- common/autotest_common.sh@972 -- # wait 60032 00:05:55.661 spdk_app_start is called in Round 0. 00:05:55.661 Shutdown signal received, stop current app iteration 00:05:55.661 Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 reinitialization... 00:05:55.661 spdk_app_start is called in Round 1. 00:05:55.661 Shutdown signal received, stop current app iteration 00:05:55.661 Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 reinitialization... 00:05:55.661 spdk_app_start is called in Round 2. 00:05:55.661 Shutdown signal received, stop current app iteration 00:05:55.661 Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 reinitialization... 00:05:55.661 spdk_app_start is called in Round 3. 
00:05:55.661 Shutdown signal received, stop current app iteration 00:05:55.661 16:54:45 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:55.661 16:54:45 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:55.661 00:05:55.661 real 0m19.046s 00:05:55.661 user 0m42.862s 00:05:55.661 sys 0m2.805s 00:05:55.661 16:54:45 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:55.661 ************************************ 00:05:55.661 END TEST app_repeat 00:05:55.661 ************************************ 00:05:55.661 16:54:45 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:55.661 16:54:45 event -- common/autotest_common.sh@1142 -- # return 0 00:05:55.661 16:54:45 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:55.661 16:54:45 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:55.661 16:54:45 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:55.661 16:54:45 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:55.661 16:54:45 event -- common/autotest_common.sh@10 -- # set +x 00:05:55.661 ************************************ 00:05:55.661 START TEST cpu_locks 00:05:55.661 ************************************ 00:05:55.661 16:54:45 event.cpu_locks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:55.920 * Looking for test storage... 00:05:55.920 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:55.920 16:54:46 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:55.920 16:54:46 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:55.920 16:54:46 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:55.920 16:54:46 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:55.920 16:54:46 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:55.920 16:54:46 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:55.920 16:54:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:55.920 ************************************ 00:05:55.920 START TEST default_locks 00:05:55.920 ************************************ 00:05:55.920 16:54:46 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:05:55.920 16:54:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=60464 00:05:55.920 16:54:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 60464 00:05:55.920 16:54:46 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 60464 ']' 00:05:55.920 16:54:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:55.920 16:54:46 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.920 16:54:46 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:55.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:55.920 16:54:46 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
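The teardown traced above goes through the killprocess helper in common/autotest_common.sh; a compressed sketch of what the trace shows (the sudo branch and xtrace bookkeeping are omitted, so this is illustrative rather than the helper verbatim):

    killprocess() {
        local pid=$1
        kill -0 "$pid"                              # bail out early if the process is already gone
        local name
        name=$(ps --no-headers -o comm= "$pid")     # reactor_0 for a single-core SPDK app
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                 # reap it so the next test starts from a clean slate
    }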
00:05:55.920 16:54:46 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:55.920 16:54:46 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:55.920 [2024-07-15 16:54:46.092134] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:05:55.920 [2024-07-15 16:54:46.092229] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60464 ] 00:05:56.177 [2024-07-15 16:54:46.235822] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.177 [2024-07-15 16:54:46.383661] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.177 [2024-07-15 16:54:46.449598] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:57.110 16:54:47 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:57.110 16:54:47 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:05:57.110 16:54:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 60464 00:05:57.110 16:54:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 60464 00:05:57.110 16:54:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:57.367 16:54:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 60464 00:05:57.367 16:54:47 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 60464 ']' 00:05:57.367 16:54:47 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 60464 00:05:57.367 16:54:47 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:05:57.367 16:54:47 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:57.367 16:54:47 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60464 00:05:57.367 16:54:47 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:57.367 killing process with pid 60464 00:05:57.367 16:54:47 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:57.367 16:54:47 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60464' 00:05:57.367 16:54:47 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 60464 00:05:57.367 16:54:47 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 60464 00:05:57.625 16:54:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 60464 00:05:57.625 16:54:47 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:05:57.625 16:54:47 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 60464 00:05:57.625 16:54:47 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:57.625 16:54:47 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:57.625 16:54:47 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:57.625 16:54:47 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:57.625 16:54:47 
event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 60464 00:05:57.625 16:54:47 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 60464 ']' 00:05:57.625 16:54:47 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.625 16:54:47 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:57.625 16:54:47 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:57.625 16:54:47 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:57.625 16:54:47 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:57.625 ERROR: process (pid: 60464) is no longer running 00:05:57.625 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (60464) - No such process 00:05:57.625 16:54:47 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:57.625 16:54:47 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:05:57.625 16:54:47 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:05:57.625 16:54:47 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:57.625 16:54:47 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:57.625 16:54:47 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:57.625 16:54:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:57.625 16:54:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:57.625 16:54:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:57.625 16:54:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:57.625 00:05:57.625 real 0m1.815s 00:05:57.625 user 0m1.936s 00:05:57.625 sys 0m0.538s 00:05:57.625 16:54:47 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:57.625 16:54:47 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:57.625 ************************************ 00:05:57.625 END TEST default_locks 00:05:57.625 ************************************ 00:05:57.625 16:54:47 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:57.625 16:54:47 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:57.625 16:54:47 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:57.625 16:54:47 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:57.625 16:54:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:57.625 ************************************ 00:05:57.625 START TEST default_locks_via_rpc 00:05:57.625 ************************************ 00:05:57.625 16:54:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:05:57.625 16:54:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=60511 00:05:57.625 16:54:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 60511 00:05:57.625 16:54:47 event.cpu_locks.default_locks_via_rpc -- 
event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:57.625 16:54:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 60511 ']' 00:05:57.625 16:54:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:57.625 16:54:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:57.625 16:54:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.625 16:54:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:57.625 16:54:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.883 [2024-07-15 16:54:47.961898] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:05:57.884 [2024-07-15 16:54:47.961984] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60511 ] 00:05:57.884 [2024-07-15 16:54:48.100891] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.142 [2024-07-15 16:54:48.221558] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.142 [2024-07-15 16:54:48.280235] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:58.706 16:54:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:58.706 16:54:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:58.706 16:54:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:58.706 16:54:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:58.706 16:54:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.706 16:54:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:58.706 16:54:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:58.706 16:54:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:58.706 16:54:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:58.706 16:54:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:58.706 16:54:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:58.706 16:54:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:58.706 16:54:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.706 16:54:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:58.706 16:54:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 60511 00:05:58.706 16:54:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 60511 00:05:58.706 16:54:48 event.cpu_locks.default_locks_via_rpc -- 
event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:59.270 16:54:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 60511 00:05:59.270 16:54:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 60511 ']' 00:05:59.270 16:54:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 60511 00:05:59.270 16:54:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:05:59.270 16:54:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:59.270 16:54:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60511 00:05:59.270 16:54:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:59.270 16:54:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:59.270 16:54:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60511' 00:05:59.270 killing process with pid 60511 00:05:59.270 16:54:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 60511 00:05:59.270 16:54:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 60511 00:05:59.528 00:05:59.528 real 0m1.870s 00:05:59.528 user 0m2.029s 00:05:59.528 sys 0m0.559s 00:05:59.529 ************************************ 00:05:59.529 END TEST default_locks_via_rpc 00:05:59.529 ************************************ 00:05:59.529 16:54:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:59.529 16:54:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.529 16:54:49 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:59.529 16:54:49 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:59.529 16:54:49 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:59.529 16:54:49 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:59.529 16:54:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:59.529 ************************************ 00:05:59.529 START TEST non_locking_app_on_locked_coremask 00:05:59.529 ************************************ 00:05:59.529 16:54:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:05:59.529 16:54:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=60562 00:05:59.529 16:54:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:59.529 16:54:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 60562 /var/tmp/spdk.sock 00:05:59.529 16:54:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 60562 ']' 00:05:59.529 16:54:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
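The two lock tests that just finished hinge on the same observable: an spdk_tgt started on core mask 0x1 holds a spdk_cpu_lock file lock that lslocks can see, and the default_locks_via_rpc variant drops and re-takes it at runtime over RPC. A hedged sketch of those checks (finding the pid with pgrep is a stand-in; the tests use the pid captured when they launch the target):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    pid=$(pgrep -f spdk_tgt)                        # stand-in; the tests track the launch pid directly

    lslocks -p "$pid" | grep -q spdk_cpu_lock       # succeeds while the target holds the core-0 lock

    $rpc framework_disable_cpumask_locks            # lock released: the grep above would now fail
    $rpc framework_enable_cpumask_locks             # lock re-acquired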
00:05:59.529 16:54:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:59.529 16:54:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.529 16:54:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:59.529 16:54:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:59.787 [2024-07-15 16:54:49.886858] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:05:59.787 [2024-07-15 16:54:49.886953] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60562 ] 00:05:59.787 [2024-07-15 16:54:50.021489] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.045 [2024-07-15 16:54:50.125351] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.045 [2024-07-15 16:54:50.179934] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:00.978 16:54:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:00.978 16:54:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:00.978 16:54:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=60578 00:06:00.978 16:54:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:00.978 16:54:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 60578 /var/tmp/spdk2.sock 00:06:00.978 16:54:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 60578 ']' 00:06:00.978 16:54:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:00.978 16:54:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:00.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:00.978 16:54:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:00.978 16:54:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:00.978 16:54:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:00.978 [2024-07-15 16:54:50.991842] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:06:00.978 [2024-07-15 16:54:50.991926] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60578 ] 00:06:00.978 [2024-07-15 16:54:51.136953] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:00.978 [2024-07-15 16:54:51.137008] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.236 [2024-07-15 16:54:51.348966] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.236 [2024-07-15 16:54:51.457817] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:01.801 16:54:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:01.801 16:54:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:01.801 16:54:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 60562 00:06:01.801 16:54:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60562 00:06:01.801 16:54:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:02.736 16:54:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 60562 00:06:02.736 16:54:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 60562 ']' 00:06:02.736 16:54:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 60562 00:06:02.736 16:54:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:02.736 16:54:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:02.736 16:54:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60562 00:06:02.736 16:54:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:02.736 killing process with pid 60562 00:06:02.736 16:54:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:02.736 16:54:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60562' 00:06:02.736 16:54:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 60562 00:06:02.736 16:54:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 60562 00:06:03.302 16:54:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 60578 00:06:03.302 16:54:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 60578 ']' 00:06:03.302 16:54:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 60578 00:06:03.302 16:54:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:03.302 16:54:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:03.302 16:54:53 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60578 00:06:03.302 16:54:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:03.302 16:54:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:03.302 killing process with pid 60578 00:06:03.302 16:54:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60578' 00:06:03.302 16:54:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 60578 00:06:03.302 16:54:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 60578 00:06:03.869 00:06:03.869 real 0m4.117s 00:06:03.869 user 0m4.622s 00:06:03.869 sys 0m1.147s 00:06:03.869 16:54:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:03.869 16:54:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:03.869 ************************************ 00:06:03.869 END TEST non_locking_app_on_locked_coremask 00:06:03.869 ************************************ 00:06:03.869 16:54:53 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:03.869 16:54:53 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:03.869 16:54:53 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:03.869 16:54:53 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:03.869 16:54:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:03.869 ************************************ 00:06:03.869 START TEST locking_app_on_unlocked_coremask 00:06:03.869 ************************************ 00:06:03.869 16:54:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:06:03.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:03.869 16:54:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=60645 00:06:03.869 16:54:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:03.869 16:54:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 60645 /var/tmp/spdk.sock 00:06:03.869 16:54:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 60645 ']' 00:06:03.869 16:54:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.869 16:54:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:03.869 16:54:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
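The coremask tests around this point all combine the same two launch variants seen in the trace: one spdk_tgt that claims the core-0 lock and one that opts out of locking and answers on a second RPC socket. A rough sketch of that combination, noting that each instance runs DPDK under its own spdk_pid<pid> file prefix (visible in the EAL parameter lines), which is what keeps the two from colliding:

    bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

    $bin -m 0x1 &                                                   # instance 1: takes the spdk_cpu_lock on core 0
    $bin -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &    # instance 2: no lock, separate RPC socket
    # the harness then waits for each RPC socket before issuing commands; a second instance
    # started on the same mask *without* --disable-cpumask-locks would instead abort because
    # core 0 is already claimed by instance 1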
00:06:03.869 16:54:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:03.869 16:54:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:03.869 [2024-07-15 16:54:54.057968] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:06:03.869 [2024-07-15 16:54:54.058251] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60645 ] 00:06:04.127 [2024-07-15 16:54:54.196759] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:04.127 [2024-07-15 16:54:54.197076] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.127 [2024-07-15 16:54:54.303819] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.127 [2024-07-15 16:54:54.358647] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:05.064 16:54:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:05.064 16:54:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:05.064 16:54:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=60661 00:06:05.064 16:54:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:05.064 16:54:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 60661 /var/tmp/spdk2.sock 00:06:05.064 16:54:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 60661 ']' 00:06:05.064 16:54:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:05.064 16:54:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:05.064 16:54:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:05.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:05.064 16:54:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:05.064 16:54:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:05.064 [2024-07-15 16:54:55.062310] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:06:05.064 [2024-07-15 16:54:55.062645] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60661 ] 00:06:05.064 [2024-07-15 16:54:55.208057] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.324 [2024-07-15 16:54:55.432158] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.324 [2024-07-15 16:54:55.544735] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:05.892 16:54:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:05.892 16:54:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:05.892 16:54:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 60661 00:06:05.892 16:54:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60661 00:06:05.892 16:54:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:06.860 16:54:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 60645 00:06:06.860 16:54:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 60645 ']' 00:06:06.860 16:54:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 60645 00:06:06.860 16:54:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:06.860 16:54:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:06.861 16:54:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60645 00:06:06.861 16:54:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:06.861 16:54:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:06.861 16:54:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60645' 00:06:06.861 killing process with pid 60645 00:06:06.861 16:54:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 60645 00:06:06.861 16:54:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 60645 00:06:07.429 16:54:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 60661 00:06:07.429 16:54:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 60661 ']' 00:06:07.429 16:54:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 60661 00:06:07.429 16:54:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:07.429 16:54:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:07.429 16:54:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60661 00:06:07.429 16:54:57 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:07.429 16:54:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:07.429 killing process with pid 60661 00:06:07.429 16:54:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60661' 00:06:07.429 16:54:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 60661 00:06:07.429 16:54:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 60661 00:06:07.997 00:06:07.997 real 0m4.049s 00:06:07.997 user 0m4.471s 00:06:07.997 sys 0m1.119s 00:06:07.997 16:54:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:07.997 16:54:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:07.997 ************************************ 00:06:07.997 END TEST locking_app_on_unlocked_coremask 00:06:07.997 ************************************ 00:06:07.997 16:54:58 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:07.997 16:54:58 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:07.997 16:54:58 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:07.997 16:54:58 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:07.997 16:54:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:07.997 ************************************ 00:06:07.997 START TEST locking_app_on_locked_coremask 00:06:07.997 ************************************ 00:06:07.997 16:54:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:06:07.997 16:54:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=60728 00:06:07.997 16:54:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 60728 /var/tmp/spdk.sock 00:06:07.997 16:54:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 60728 ']' 00:06:07.997 16:54:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:07.997 16:54:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.997 16:54:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:07.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:07.997 16:54:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.997 16:54:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:07.997 16:54:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:07.997 [2024-07-15 16:54:58.160333] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:06:07.997 [2024-07-15 16:54:58.160447] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60728 ] 00:06:08.255 [2024-07-15 16:54:58.296964] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.255 [2024-07-15 16:54:58.413097] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.255 [2024-07-15 16:54:58.466640] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:09.192 16:54:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:09.192 16:54:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:09.192 16:54:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=60744 00:06:09.192 16:54:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 60744 /var/tmp/spdk2.sock 00:06:09.192 16:54:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:09.192 16:54:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 60744 /var/tmp/spdk2.sock 00:06:09.192 16:54:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:09.192 16:54:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:09.192 16:54:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:09.192 16:54:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:09.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:09.192 16:54:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:09.192 16:54:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 60744 /var/tmp/spdk2.sock 00:06:09.192 16:54:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 60744 ']' 00:06:09.192 16:54:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:09.192 16:54:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:09.192 16:54:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:09.192 16:54:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:09.192 16:54:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:09.192 [2024-07-15 16:54:59.190767] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:06:09.192 [2024-07-15 16:54:59.190860] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60744 ] 00:06:09.192 [2024-07-15 16:54:59.335582] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 60728 has claimed it. 00:06:09.192 [2024-07-15 16:54:59.335658] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:09.760 ERROR: process (pid: 60744) is no longer running 00:06:09.760 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (60744) - No such process 00:06:09.760 16:54:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:09.760 16:54:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:09.760 16:54:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:09.760 16:54:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:09.760 16:54:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:09.760 16:54:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:09.760 16:54:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 60728 00:06:09.760 16:54:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60728 00:06:09.760 16:54:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:10.018 16:55:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 60728 00:06:10.018 16:55:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 60728 ']' 00:06:10.018 16:55:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 60728 00:06:10.018 16:55:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:10.018 16:55:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:10.018 16:55:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60728 00:06:10.018 killing process with pid 60728 00:06:10.018 16:55:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:10.018 16:55:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:10.018 16:55:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60728' 00:06:10.018 16:55:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 60728 00:06:10.018 16:55:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 60728 00:06:10.587 00:06:10.587 real 0m2.593s 00:06:10.587 user 0m2.996s 00:06:10.587 sys 0m0.605s 00:06:10.587 16:55:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:10.587 16:55:00 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:06:10.587 ************************************ 00:06:10.587 END TEST locking_app_on_locked_coremask 00:06:10.587 ************************************ 00:06:10.587 16:55:00 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:10.587 16:55:00 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:10.587 16:55:00 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:10.587 16:55:00 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:10.587 16:55:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:10.587 ************************************ 00:06:10.587 START TEST locking_overlapped_coremask 00:06:10.587 ************************************ 00:06:10.587 16:55:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:06:10.587 16:55:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=60795 00:06:10.587 16:55:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:10.587 16:55:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 60795 /var/tmp/spdk.sock 00:06:10.587 16:55:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 60795 ']' 00:06:10.587 16:55:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.587 16:55:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:10.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:10.587 16:55:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.587 16:55:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:10.587 16:55:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:10.587 [2024-07-15 16:55:00.803632] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:06:10.587 [2024-07-15 16:55:00.803744] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60795 ] 00:06:10.844 [2024-07-15 16:55:00.940715] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:10.844 [2024-07-15 16:55:01.050626] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:10.844 [2024-07-15 16:55:01.050763] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:10.844 [2024-07-15 16:55:01.050767] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.844 [2024-07-15 16:55:01.103241] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:11.786 16:55:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:11.786 16:55:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:11.786 16:55:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:11.786 16:55:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=60813 00:06:11.786 16:55:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 60813 /var/tmp/spdk2.sock 00:06:11.786 16:55:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:11.786 16:55:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 60813 /var/tmp/spdk2.sock 00:06:11.786 16:55:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:11.786 16:55:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:11.786 16:55:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:11.786 16:55:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:11.786 16:55:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 60813 /var/tmp/spdk2.sock 00:06:11.786 16:55:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 60813 ']' 00:06:11.786 16:55:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:11.786 16:55:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:11.786 16:55:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:11.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:11.786 16:55:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:11.786 16:55:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:11.786 [2024-07-15 16:55:01.847949] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:06:11.786 [2024-07-15 16:55:01.848044] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60813 ] 00:06:11.786 [2024-07-15 16:55:01.990830] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60795 has claimed it. 00:06:11.786 [2024-07-15 16:55:01.990887] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:12.353 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (60813) - No such process 00:06:12.353 ERROR: process (pid: 60813) is no longer running 00:06:12.353 16:55:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:12.353 16:55:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:12.353 16:55:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:12.353 16:55:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:12.353 16:55:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:12.353 16:55:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:12.353 16:55:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:12.353 16:55:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:12.353 16:55:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:12.353 16:55:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:12.353 16:55:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 60795 00:06:12.353 16:55:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 60795 ']' 00:06:12.353 16:55:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 60795 00:06:12.353 16:55:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:06:12.353 16:55:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:12.353 16:55:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60795 00:06:12.353 16:55:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:12.353 16:55:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:12.353 killing process with pid 60795 00:06:12.353 16:55:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60795' 00:06:12.353 16:55:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # kill 60795 00:06:12.353 16:55:02 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 60795 00:06:12.921 00:06:12.921 real 0m2.209s 00:06:12.921 user 0m6.140s 00:06:12.921 sys 0m0.431s 00:06:12.921 16:55:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:12.921 16:55:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:12.921 ************************************ 00:06:12.921 END TEST locking_overlapped_coremask 00:06:12.921 ************************************ 00:06:12.921 16:55:02 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:12.921 16:55:02 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:12.921 16:55:02 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:12.921 16:55:02 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:12.921 16:55:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:12.921 ************************************ 00:06:12.921 START TEST locking_overlapped_coremask_via_rpc 00:06:12.921 ************************************ 00:06:12.921 16:55:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:06:12.921 16:55:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=60853 00:06:12.921 16:55:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 60853 /var/tmp/spdk.sock 00:06:12.921 16:55:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 60853 ']' 00:06:12.921 16:55:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:12.921 16:55:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:12.921 16:55:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.921 16:55:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:12.921 16:55:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:12.921 16:55:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.921 [2024-07-15 16:55:03.055183] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:06:12.921 [2024-07-15 16:55:03.055277] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60853 ] 00:06:12.921 [2024-07-15 16:55:03.190565] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:12.921 [2024-07-15 16:55:03.190629] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:13.199 [2024-07-15 16:55:03.292722] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:13.199 [2024-07-15 16:55:03.292825] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:13.199 [2024-07-15 16:55:03.292833] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.199 [2024-07-15 16:55:03.346763] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:13.766 16:55:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:13.766 16:55:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:13.766 16:55:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=60871 00:06:13.766 16:55:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 60871 /var/tmp/spdk2.sock 00:06:13.767 16:55:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:13.767 16:55:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 60871 ']' 00:06:13.767 16:55:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:13.767 16:55:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:13.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:13.767 16:55:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:13.767 16:55:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:13.767 16:55:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:14.025 [2024-07-15 16:55:04.069061] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:06:14.025 [2024-07-15 16:55:04.069156] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60871 ] 00:06:14.025 [2024-07-15 16:55:04.212443] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
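Both targets in the traces above are launched with --disable-cpumask-locks, so neither claims per-core lock files at startup even though their masks overlap on core 2 (0x7 covers cores 0-2, 0x1c covers cores 2-4). A minimal by-hand sketch of that setup, using the same binary, masks and socket path the test itself uses:
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &
  # With core locks deactivated, both instances start and no /var/tmp/spdk_cpu_lock_* files are taken yet.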
00:06:14.025 [2024-07-15 16:55:04.212489] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:14.283 [2024-07-15 16:55:04.432043] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:14.283 [2024-07-15 16:55:04.432141] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:06:14.283 [2024-07-15 16:55:04.432144] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:14.283 [2024-07-15 16:55:04.535473] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:14.849 16:55:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:14.849 16:55:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:14.849 16:55:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:14.849 16:55:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:14.849 16:55:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:14.849 16:55:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:14.849 16:55:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:14.849 16:55:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:14.849 16:55:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:14.849 16:55:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:14.849 16:55:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:14.849 16:55:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:14.849 16:55:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:14.849 16:55:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:14.849 16:55:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:14.849 16:55:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:14.849 [2024-07-15 16:55:05.052473] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60853 has claimed it. 00:06:14.849 request: 00:06:14.849 { 00:06:14.849 "method": "framework_enable_cpumask_locks", 00:06:14.849 "req_id": 1 00:06:14.849 } 00:06:14.849 Got JSON-RPC error response 00:06:14.849 response: 00:06:14.849 { 00:06:14.849 "code": -32603, 00:06:14.849 "message": "Failed to claim CPU core: 2" 00:06:14.849 } 00:06:14.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
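The JSON-RPC exchange above is the heart of the via_rpc variant: the core locks are claimed lazily through framework_enable_cpumask_locks instead of at startup, and the second claim fails with error -32603 once process 60853 already holds core 2. A rough stand-alone equivalent using the stock rpc.py client (a sketch; the test itself goes through its rpc_cmd wrapper):
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_enable_cpumask_locks
  # first target claims /var/tmp/spdk_cpu_lock_000..002 for cores 0-2
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
  # expected to fail with "Failed to claim CPU core: 2" (code -32603), since core 2 is already locked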
00:06:14.850 16:55:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:14.850 16:55:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:14.850 16:55:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:14.850 16:55:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:14.850 16:55:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:14.850 16:55:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 60853 /var/tmp/spdk.sock 00:06:14.850 16:55:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 60853 ']' 00:06:14.850 16:55:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.850 16:55:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:14.850 16:55:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.850 16:55:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:14.850 16:55:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:15.111 16:55:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:15.111 16:55:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:15.111 16:55:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 60871 /var/tmp/spdk2.sock 00:06:15.111 16:55:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 60871 ']' 00:06:15.111 16:55:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:15.111 16:55:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:15.111 16:55:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:15.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:15.111 16:55:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:15.111 16:55:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:15.401 16:55:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:15.401 16:55:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:15.401 16:55:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:15.401 16:55:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:15.401 16:55:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:15.401 16:55:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:15.401 00:06:15.401 real 0m2.575s 00:06:15.401 user 0m1.307s 00:06:15.401 sys 0m0.181s 00:06:15.401 ************************************ 00:06:15.401 END TEST locking_overlapped_coremask_via_rpc 00:06:15.401 ************************************ 00:06:15.401 16:55:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:15.401 16:55:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:15.401 16:55:05 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:15.401 16:55:05 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:15.401 16:55:05 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60853 ]] 00:06:15.401 16:55:05 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60853 00:06:15.401 16:55:05 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 60853 ']' 00:06:15.401 16:55:05 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 60853 00:06:15.401 16:55:05 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:06:15.401 16:55:05 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:15.401 16:55:05 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60853 00:06:15.401 killing process with pid 60853 00:06:15.401 16:55:05 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:15.401 16:55:05 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:15.401 16:55:05 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60853' 00:06:15.401 16:55:05 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 60853 00:06:15.401 16:55:05 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 60853 00:06:15.969 16:55:06 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60871 ]] 00:06:15.969 16:55:06 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60871 00:06:15.969 16:55:06 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 60871 ']' 00:06:15.969 16:55:06 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 60871 00:06:15.969 16:55:06 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:06:15.969 16:55:06 
event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:15.969 16:55:06 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60871 00:06:15.969 killing process with pid 60871 00:06:15.969 16:55:06 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:15.969 16:55:06 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:15.969 16:55:06 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60871' 00:06:15.969 16:55:06 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 60871 00:06:15.969 16:55:06 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 60871 00:06:16.227 16:55:06 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:16.227 16:55:06 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:16.227 Process with pid 60853 is not found 00:06:16.227 Process with pid 60871 is not found 00:06:16.228 16:55:06 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60853 ]] 00:06:16.228 16:55:06 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60853 00:06:16.228 16:55:06 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 60853 ']' 00:06:16.228 16:55:06 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 60853 00:06:16.228 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (60853) - No such process 00:06:16.228 16:55:06 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 60853 is not found' 00:06:16.228 16:55:06 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60871 ]] 00:06:16.228 16:55:06 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60871 00:06:16.228 16:55:06 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 60871 ']' 00:06:16.228 16:55:06 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 60871 00:06:16.228 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (60871) - No such process 00:06:16.228 16:55:06 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 60871 is not found' 00:06:16.228 16:55:06 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:16.228 00:06:16.228 real 0m20.509s 00:06:16.228 user 0m35.619s 00:06:16.228 sys 0m5.408s 00:06:16.228 16:55:06 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:16.228 16:55:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:16.228 ************************************ 00:06:16.228 END TEST cpu_locks 00:06:16.228 ************************************ 00:06:16.228 16:55:06 event -- common/autotest_common.sh@1142 -- # return 0 00:06:16.228 ************************************ 00:06:16.228 END TEST event 00:06:16.228 ************************************ 00:06:16.228 00:06:16.228 real 0m48.512s 00:06:16.228 user 1m33.802s 00:06:16.228 sys 0m8.956s 00:06:16.228 16:55:06 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:16.228 16:55:06 event -- common/autotest_common.sh@10 -- # set +x 00:06:16.228 16:55:06 -- common/autotest_common.sh@1142 -- # return 0 00:06:16.228 16:55:06 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:16.228 16:55:06 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:16.228 16:55:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:16.228 16:55:06 -- common/autotest_common.sh@10 -- # set +x 00:06:16.228 ************************************ 00:06:16.228 START TEST thread 
00:06:16.228 ************************************ 00:06:16.228 16:55:06 thread -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:16.486 * Looking for test storage... 00:06:16.486 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:16.486 16:55:06 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:16.486 16:55:06 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:16.486 16:55:06 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:16.486 16:55:06 thread -- common/autotest_common.sh@10 -- # set +x 00:06:16.486 ************************************ 00:06:16.486 START TEST thread_poller_perf 00:06:16.486 ************************************ 00:06:16.486 16:55:06 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:16.486 [2024-07-15 16:55:06.630566] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:06:16.486 [2024-07-15 16:55:06.630656] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60994 ] 00:06:16.486 [2024-07-15 16:55:06.771750] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.745 [2024-07-15 16:55:06.910276] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.745 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:18.122 ====================================== 00:06:18.122 busy:2209459071 (cyc) 00:06:18.122 total_run_count: 285000 00:06:18.122 tsc_hz: 2200000000 (cyc) 00:06:18.122 ====================================== 00:06:18.122 poller_cost: 7752 (cyc), 3523 (nsec) 00:06:18.122 00:06:18.122 real 0m1.400s 00:06:18.122 user 0m1.228s 00:06:18.122 sys 0m0.063s 00:06:18.122 16:55:08 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:18.122 ************************************ 00:06:18.122 END TEST thread_poller_perf 00:06:18.122 ************************************ 00:06:18.122 16:55:08 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:18.122 16:55:08 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:18.122 16:55:08 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:18.122 16:55:08 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:18.122 16:55:08 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:18.122 16:55:08 thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.122 ************************************ 00:06:18.122 START TEST thread_poller_perf 00:06:18.122 ************************************ 00:06:18.122 16:55:08 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:18.122 [2024-07-15 16:55:08.085866] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:06:18.122 [2024-07-15 16:55:08.085957] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61029 ] 00:06:18.122 [2024-07-15 16:55:08.225301] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.122 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:18.122 [2024-07-15 16:55:08.337099] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.501 ====================================== 00:06:19.501 busy:2201889032 (cyc) 00:06:19.501 total_run_count: 4201000 00:06:19.501 tsc_hz: 2200000000 (cyc) 00:06:19.501 ====================================== 00:06:19.501 poller_cost: 524 (cyc), 238 (nsec) 00:06:19.501 ************************************ 00:06:19.501 END TEST thread_poller_perf 00:06:19.501 ************************************ 00:06:19.501 00:06:19.501 real 0m1.356s 00:06:19.501 user 0m1.192s 00:06:19.501 sys 0m0.057s 00:06:19.501 16:55:09 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:19.501 16:55:09 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:19.501 16:55:09 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:19.501 16:55:09 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:19.501 00:06:19.501 real 0m2.940s 00:06:19.501 user 0m2.478s 00:06:19.501 sys 0m0.238s 00:06:19.501 16:55:09 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:19.501 ************************************ 00:06:19.501 END TEST thread 00:06:19.501 ************************************ 00:06:19.501 16:55:09 thread -- common/autotest_common.sh@10 -- # set +x 00:06:19.501 16:55:09 -- common/autotest_common.sh@1142 -- # return 0 00:06:19.501 16:55:09 -- spdk/autotest.sh@183 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:19.501 16:55:09 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:19.501 16:55:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:19.501 16:55:09 -- common/autotest_common.sh@10 -- # set +x 00:06:19.501 ************************************ 00:06:19.501 START TEST accel 00:06:19.501 ************************************ 00:06:19.501 16:55:09 accel -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:19.501 * Looking for test storage... 00:06:19.501 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:06:19.501 16:55:09 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:19.501 16:55:09 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:06:19.501 16:55:09 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:19.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
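In both poller_perf summaries the derived figures follow directly from the raw counters: poller_cost in cycles is busy cycles divided by total_run_count, and the nanosecond value converts that through tsc_hz. A quick check against the second run, assuming nothing beyond the numbers printed above:
  busy=2201889032; runs=4201000; tsc_hz=2200000000
  awk -v b="$busy" -v r="$runs" -v hz="$tsc_hz" \
      'BEGIN { cyc = b / r; printf "poller_cost: %d (cyc), %d (nsec)\n", cyc, cyc * 1e9 / hz }'
  # prints "poller_cost: 524 (cyc), 238 (nsec)", matching the report for the 0-microsecond-period run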
00:06:19.501 16:55:09 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=61098 00:06:19.501 16:55:09 accel -- accel/accel.sh@63 -- # waitforlisten 61098 00:06:19.501 16:55:09 accel -- common/autotest_common.sh@829 -- # '[' -z 61098 ']' 00:06:19.501 16:55:09 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.501 16:55:09 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:19.501 16:55:09 accel -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:19.501 16:55:09 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.501 16:55:09 accel -- accel/accel.sh@61 -- # build_accel_config 00:06:19.501 16:55:09 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:19.501 16:55:09 accel -- common/autotest_common.sh@10 -- # set +x 00:06:19.501 16:55:09 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:19.501 16:55:09 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:19.501 16:55:09 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:19.501 16:55:09 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:19.501 16:55:09 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:19.501 16:55:09 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:19.501 16:55:09 accel -- accel/accel.sh@41 -- # jq -r . 00:06:19.501 [2024-07-15 16:55:09.663959] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:06:19.501 [2024-07-15 16:55:09.664243] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61098 ] 00:06:19.760 [2024-07-15 16:55:09.799671] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.760 [2024-07-15 16:55:09.919897] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.760 [2024-07-15 16:55:09.974392] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:20.411 16:55:10 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:20.411 16:55:10 accel -- common/autotest_common.sh@862 -- # return 0 00:06:20.411 16:55:10 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:20.411 16:55:10 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:20.411 16:55:10 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:20.411 16:55:10 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:20.411 16:55:10 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:20.411 16:55:10 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:20.411 16:55:10 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:20.411 16:55:10 accel -- common/autotest_common.sh@10 -- # set +x 00:06:20.411 16:55:10 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:20.411 16:55:10 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:20.411 16:55:10 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:20.411 16:55:10 accel -- accel/accel.sh@72 -- # IFS== 00:06:20.411 16:55:10 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:20.411 16:55:10 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:20.411 16:55:10 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:20.411 16:55:10 accel -- accel/accel.sh@72 -- # IFS== 00:06:20.411 16:55:10 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:20.411 16:55:10 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:20.411 16:55:10 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:20.411 16:55:10 accel -- accel/accel.sh@72 -- # IFS== 00:06:20.411 16:55:10 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:20.411 16:55:10 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:20.411 16:55:10 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:20.411 16:55:10 accel -- accel/accel.sh@72 -- # IFS== 00:06:20.411 16:55:10 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:20.411 16:55:10 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:20.411 16:55:10 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:20.411 16:55:10 accel -- accel/accel.sh@72 -- # IFS== 00:06:20.411 16:55:10 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:20.411 16:55:10 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:20.411 16:55:10 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:20.668 16:55:10 accel -- accel/accel.sh@72 -- # IFS== 00:06:20.668 16:55:10 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:20.668 16:55:10 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:20.668 16:55:10 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:20.668 16:55:10 accel -- accel/accel.sh@72 -- # IFS== 00:06:20.668 16:55:10 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:20.668 16:55:10 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:20.668 16:55:10 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:20.668 16:55:10 accel -- accel/accel.sh@72 -- # IFS== 00:06:20.668 16:55:10 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:20.668 16:55:10 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:20.668 16:55:10 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:20.668 16:55:10 accel -- accel/accel.sh@72 -- # IFS== 00:06:20.668 16:55:10 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:20.668 16:55:10 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:20.668 16:55:10 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:20.668 16:55:10 accel -- accel/accel.sh@72 -- # IFS== 00:06:20.668 16:55:10 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:20.668 16:55:10 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:20.668 16:55:10 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:20.668 16:55:10 accel -- accel/accel.sh@72 -- # IFS== 00:06:20.668 16:55:10 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:20.668 16:55:10 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:20.668 16:55:10 accel -- accel/accel.sh@71 -- # for opc_opt in 
"${exp_opcs[@]}" 00:06:20.668 16:55:10 accel -- accel/accel.sh@72 -- # IFS== 00:06:20.668 16:55:10 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:20.668 16:55:10 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:20.668 16:55:10 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:20.668 16:55:10 accel -- accel/accel.sh@72 -- # IFS== 00:06:20.668 16:55:10 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:20.668 16:55:10 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:20.668 16:55:10 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:20.668 16:55:10 accel -- accel/accel.sh@72 -- # IFS== 00:06:20.668 16:55:10 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:20.668 16:55:10 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:20.668 16:55:10 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:20.668 16:55:10 accel -- accel/accel.sh@72 -- # IFS== 00:06:20.668 16:55:10 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:20.668 16:55:10 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:20.668 16:55:10 accel -- accel/accel.sh@75 -- # killprocess 61098 00:06:20.668 16:55:10 accel -- common/autotest_common.sh@948 -- # '[' -z 61098 ']' 00:06:20.668 16:55:10 accel -- common/autotest_common.sh@952 -- # kill -0 61098 00:06:20.668 16:55:10 accel -- common/autotest_common.sh@953 -- # uname 00:06:20.668 16:55:10 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:20.668 16:55:10 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61098 00:06:20.668 killing process with pid 61098 00:06:20.668 16:55:10 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:20.668 16:55:10 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:20.668 16:55:10 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61098' 00:06:20.668 16:55:10 accel -- common/autotest_common.sh@967 -- # kill 61098 00:06:20.668 16:55:10 accel -- common/autotest_common.sh@972 -- # wait 61098 00:06:20.926 16:55:11 accel -- accel/accel.sh@76 -- # trap - ERR 00:06:20.926 16:55:11 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:20.926 16:55:11 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:20.926 16:55:11 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:20.926 16:55:11 accel -- common/autotest_common.sh@10 -- # set +x 00:06:20.926 16:55:11 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:06:20.926 16:55:11 accel.accel_help -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:20.926 16:55:11 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:06:20.926 16:55:11 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:20.926 16:55:11 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:20.926 16:55:11 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:20.926 16:55:11 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:20.926 16:55:11 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:20.926 16:55:11 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:06:20.926 16:55:11 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:06:20.926 16:55:11 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:20.926 16:55:11 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:06:20.926 16:55:11 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:20.926 16:55:11 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:20.926 16:55:11 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:20.926 16:55:11 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:20.926 16:55:11 accel -- common/autotest_common.sh@10 -- # set +x 00:06:20.926 ************************************ 00:06:20.926 START TEST accel_missing_filename 00:06:20.926 ************************************ 00:06:20.926 16:55:11 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:06:20.926 16:55:11 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:06:20.926 16:55:11 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:20.926 16:55:11 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:20.926 16:55:11 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:20.926 16:55:11 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:20.926 16:55:11 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:20.926 16:55:11 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:06:20.926 16:55:11 accel.accel_missing_filename -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:20.926 16:55:11 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:06:20.926 16:55:11 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:20.926 16:55:11 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:20.926 16:55:11 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:20.926 16:55:11 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:20.926 16:55:11 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:20.926 16:55:11 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:06:20.926 16:55:11 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:06:21.183 [2024-07-15 16:55:11.234331] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:06:21.183 [2024-07-15 16:55:11.234415] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61155 ] 00:06:21.183 [2024-07-15 16:55:11.369801] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.440 [2024-07-15 16:55:11.482395] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.440 [2024-07-15 16:55:11.538135] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:21.440 [2024-07-15 16:55:11.614482] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:06:21.440 A filename is required. 
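The failure above is the expected result: for the compress workload accel_perf refuses to start without an input file, which is exactly what the NOT wrapper asserts. A form of the same command that clears the missing-filename check, pointing -l at the input file the neighbouring compress_verify test uses:
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib
  # -l names the uncompressed input for compress/decompress; omitting it aborts with "A filename is required."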
00:06:21.440 16:55:11 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:06:21.440 16:55:11 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:21.440 16:55:11 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:06:21.440 16:55:11 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:06:21.440 16:55:11 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:06:21.440 16:55:11 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:21.440 00:06:21.440 real 0m0.486s 00:06:21.440 user 0m0.317s 00:06:21.440 sys 0m0.109s 00:06:21.440 16:55:11 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:21.440 ************************************ 00:06:21.440 END TEST accel_missing_filename 00:06:21.440 ************************************ 00:06:21.440 16:55:11 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:06:21.854 16:55:11 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:21.854 16:55:11 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:21.854 16:55:11 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:21.854 16:55:11 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:21.854 16:55:11 accel -- common/autotest_common.sh@10 -- # set +x 00:06:21.854 ************************************ 00:06:21.854 START TEST accel_compress_verify 00:06:21.854 ************************************ 00:06:21.854 16:55:11 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:21.854 16:55:11 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:06:21.854 16:55:11 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:21.854 16:55:11 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:21.854 16:55:11 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:21.854 16:55:11 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:21.854 16:55:11 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:21.854 16:55:11 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:21.854 16:55:11 accel.accel_compress_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:21.854 16:55:11 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:21.854 16:55:11 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:21.854 16:55:11 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:21.854 16:55:11 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:21.854 16:55:11 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:21.854 16:55:11 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:21.854 16:55:11 accel.accel_compress_verify -- 
accel/accel.sh@40 -- # local IFS=, 00:06:21.854 16:55:11 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:06:21.854 [2024-07-15 16:55:11.776160] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:06:21.854 [2024-07-15 16:55:11.776748] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61180 ] 00:06:21.854 [2024-07-15 16:55:11.911051] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.854 [2024-07-15 16:55:12.008703] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.854 [2024-07-15 16:55:12.063084] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:21.854 [2024-07-15 16:55:12.138258] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:06:22.111 00:06:22.111 Compression does not support the verify option, aborting. 00:06:22.111 16:55:12 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:06:22.111 16:55:12 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:22.111 16:55:12 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:06:22.111 ************************************ 00:06:22.111 END TEST accel_compress_verify 00:06:22.111 ************************************ 00:06:22.111 16:55:12 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:06:22.111 16:55:12 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:06:22.111 16:55:12 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:22.111 00:06:22.111 real 0m0.477s 00:06:22.111 user 0m0.310s 00:06:22.111 sys 0m0.105s 00:06:22.111 16:55:12 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:22.111 16:55:12 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:06:22.111 16:55:12 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:22.111 16:55:12 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:22.111 16:55:12 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:22.111 16:55:12 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:22.111 16:55:12 accel -- common/autotest_common.sh@10 -- # set +x 00:06:22.111 ************************************ 00:06:22.111 START TEST accel_wrong_workload 00:06:22.111 ************************************ 00:06:22.111 16:55:12 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:06:22.111 16:55:12 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:06:22.111 16:55:12 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:22.111 16:55:12 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:22.111 16:55:12 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:22.111 16:55:12 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:22.111 16:55:12 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:22.111 16:55:12 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 
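The es= chains traced above (234 to 106 to 1 for the missing filename, 161 to 33 to 1 for compress with verify) show the harness folding exit statuses above 128 back into range and then collapsing any remaining failure to 1. A hedged reconstruction of that normalization, based only on the values visible in the trace (the real case arms in autotest_common.sh may differ):

  # Fold >128 statuses back into 0-127, then report any failure uniformly
  # as 1 -- a guess at the logic behind the traced es= values.
  normalize_es() {
      local es=$1
      (( es > 128 )) && es=$(( es - 128 ))
      case "$es" in
          0) return 0 ;;
          *) return 1 ;;
      esac
  }

  normalize_es 234   # returns 1, matching the 234 -> 106 -> 1 chain above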
00:06:22.111 16:55:12 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:22.111 16:55:12 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:06:22.111 16:55:12 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:22.111 16:55:12 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:22.111 16:55:12 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:22.111 16:55:12 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:22.111 16:55:12 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:22.111 16:55:12 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:06:22.111 16:55:12 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:06:22.111 Unsupported workload type: foobar 00:06:22.111 [2024-07-15 16:55:12.292158] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:22.111 accel_perf options: 00:06:22.111 [-h help message] 00:06:22.111 [-q queue depth per core] 00:06:22.111 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:22.111 [-T number of threads per core 00:06:22.111 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:22.111 [-t time in seconds] 00:06:22.111 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:22.111 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:22.111 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:22.111 [-l for compress/decompress workloads, name of uncompressed input file 00:06:22.111 [-S for crc32c workload, use this seed value (default 0) 00:06:22.111 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:22.111 [-f for fill workload, use this BYTE value (default 255) 00:06:22.111 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:22.111 [-y verify result if this switch is on] 00:06:22.111 [-a tasks to allocate per core (default: same value as -q)] 00:06:22.111 Can be used to spread operations across a wider range of memory. 
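The usage text printed above lists the accel_perf flags these tests lean on. For contrast with the deliberately broken '-w foobar' run, a well-formed invocation assembled from those documented flags would look like this (the binary path matches the one in this log; the particular values are illustrative, not taken from the run):

  # 1-second software crc32c run: queue depth 64, 4 KiB transfers, seed 32.
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
      -t 1 -w crc32c -q 64 -o 4096 -S 32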
00:06:22.111 16:55:12 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:06:22.111 16:55:12 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:22.111 16:55:12 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:22.111 16:55:12 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:22.111 00:06:22.111 real 0m0.027s 00:06:22.111 user 0m0.018s 00:06:22.111 sys 0m0.009s 00:06:22.111 16:55:12 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:22.111 16:55:12 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:06:22.111 ************************************ 00:06:22.111 END TEST accel_wrong_workload 00:06:22.111 ************************************ 00:06:22.111 16:55:12 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:22.111 16:55:12 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:22.112 16:55:12 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:22.112 16:55:12 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:22.112 16:55:12 accel -- common/autotest_common.sh@10 -- # set +x 00:06:22.112 ************************************ 00:06:22.112 START TEST accel_negative_buffers 00:06:22.112 ************************************ 00:06:22.112 16:55:12 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:22.112 16:55:12 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:06:22.112 16:55:12 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:22.112 16:55:12 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:22.112 16:55:12 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:22.112 16:55:12 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:22.112 16:55:12 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:22.112 16:55:12 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:06:22.112 16:55:12 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:22.112 16:55:12 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:06:22.112 16:55:12 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:22.112 16:55:12 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:22.112 16:55:12 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:22.112 16:55:12 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:22.112 16:55:12 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:22.112 16:55:12 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:06:22.112 16:55:12 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:06:22.112 -x option must be non-negative. 
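The '-x option must be non-negative' message above comes from passing '-x -1'; per the usage text, the xor workload needs at least two source buffers. A corrected form of that command (illustrative buffer count, same binary path as the log):

  # xor with three source buffers and result verification enabled.
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 3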
00:06:22.112 [2024-07-15 16:55:12.366265] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:22.112 accel_perf options: 00:06:22.112 [-h help message] 00:06:22.112 [-q queue depth per core] 00:06:22.112 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:22.112 [-T number of threads per core 00:06:22.112 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:22.112 [-t time in seconds] 00:06:22.112 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:22.112 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:22.112 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:22.112 [-l for compress/decompress workloads, name of uncompressed input file 00:06:22.112 [-S for crc32c workload, use this seed value (default 0) 00:06:22.112 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:22.112 [-f for fill workload, use this BYTE value (default 255) 00:06:22.112 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:22.112 [-y verify result if this switch is on] 00:06:22.112 [-a tasks to allocate per core (default: same value as -q)] 00:06:22.112 Can be used to spread operations across a wider range of memory. 00:06:22.112 16:55:12 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:06:22.112 16:55:12 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:22.112 16:55:12 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:22.112 16:55:12 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:22.112 00:06:22.112 real 0m0.031s 00:06:22.112 user 0m0.019s 00:06:22.112 sys 0m0.011s 00:06:22.112 ************************************ 00:06:22.112 END TEST accel_negative_buffers 00:06:22.112 ************************************ 00:06:22.112 16:55:12 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:22.112 16:55:12 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:06:22.112 16:55:12 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:22.112 16:55:12 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:22.369 16:55:12 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:22.369 16:55:12 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:22.369 16:55:12 accel -- common/autotest_common.sh@10 -- # set +x 00:06:22.369 ************************************ 00:06:22.369 START TEST accel_crc32c 00:06:22.369 ************************************ 00:06:22.369 16:55:12 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:22.369 16:55:12 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:22.369 16:55:12 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:22.369 16:55:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:22.369 16:55:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:22.369 16:55:12 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:22.369 16:55:12 accel.accel_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 
-w crc32c -S 32 -y 00:06:22.369 16:55:12 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:22.369 16:55:12 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:22.369 16:55:12 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:22.369 16:55:12 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:22.369 16:55:12 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:22.369 16:55:12 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:22.369 16:55:12 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:22.369 16:55:12 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:22.369 [2024-07-15 16:55:12.449998] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:06:22.369 [2024-07-15 16:55:12.450232] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61238 ] 00:06:22.369 [2024-07-15 16:55:12.589433] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.627 [2024-07-15 16:55:12.700440] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.627 16:55:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:22.627 16:55:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:22.627 16:55:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:22.627 16:55:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:22.627 16:55:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:22.627 16:55:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:22.627 16:55:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:22.627 16:55:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:22.627 16:55:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:22.627 16:55:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:22.627 16:55:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:22.627 16:55:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:22.627 16:55:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:22.627 16:55:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:22.627 16:55:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:22.627 16:55:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:22.627 16:55:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:22.627 16:55:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:22.627 16:55:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:22.627 16:55:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:22.627 16:55:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:06:22.627 16:55:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:22.627 16:55:12 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:22.627 16:55:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:22.627 16:55:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:22.627 16:55:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:22.627 16:55:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:22.627 16:55:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:22.627 16:55:12 accel.accel_crc32c -- accel/accel.sh@19 
-- # read -r var val 00:06:22.627 16:55:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:22.627 16:55:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:22.627 16:55:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:22.627 16:55:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:22.627 16:55:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:22.627 16:55:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:22.627 16:55:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:22.627 16:55:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:22.627 16:55:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:06:22.627 16:55:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:22.627 16:55:12 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:22.627 16:55:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:22.627 16:55:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:22.627 16:55:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:22.627 16:55:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:22.627 16:55:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:22.627 16:55:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:22.627 16:55:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:22.627 16:55:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:22.627 16:55:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:22.627 16:55:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:22.627 16:55:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:06:22.627 16:55:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:22.627 16:55:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:22.627 16:55:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:22.627 16:55:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:22.627 16:55:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:22.627 16:55:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:22.627 16:55:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:22.627 16:55:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:22.627 16:55:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:22.627 16:55:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:22.627 16:55:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:22.627 16:55:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:22.627 16:55:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:22.627 16:55:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:22.627 16:55:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:22.627 16:55:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:22.627 16:55:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:22.627 16:55:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:22.627 16:55:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:24.001 16:55:13 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:24.001 16:55:13 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:24.001 16:55:13 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:24.001 16:55:13 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var 
val 00:06:24.001 16:55:13 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:24.001 16:55:13 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:24.001 16:55:13 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:24.001 16:55:13 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:24.001 16:55:13 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:24.001 16:55:13 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:24.001 16:55:13 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:24.001 16:55:13 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:24.001 16:55:13 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:24.001 16:55:13 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:24.001 16:55:13 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:24.001 16:55:13 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:24.001 16:55:13 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:24.001 16:55:13 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:24.001 16:55:13 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:24.001 16:55:13 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:24.001 16:55:13 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:24.001 16:55:13 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:24.001 16:55:13 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:24.001 16:55:13 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:24.001 16:55:13 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:24.001 16:55:13 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:24.001 16:55:13 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:24.001 00:06:24.001 real 0m1.504s 00:06:24.001 user 0m1.295s 00:06:24.001 sys 0m0.114s 00:06:24.001 ************************************ 00:06:24.001 END TEST accel_crc32c 00:06:24.001 ************************************ 00:06:24.001 16:55:13 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:24.001 16:55:13 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:24.001 16:55:13 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:24.001 16:55:13 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:24.001 16:55:13 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:24.001 16:55:13 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:24.001 16:55:13 accel -- common/autotest_common.sh@10 -- # set +x 00:06:24.001 ************************************ 00:06:24.001 START TEST accel_crc32c_C2 00:06:24.001 ************************************ 00:06:24.001 16:55:13 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:24.001 16:55:13 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:24.001 16:55:13 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:24.001 16:55:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:24.001 16:55:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:24.001 16:55:13 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:24.001 16:55:13 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:24.001 16:55:13 
accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:24.001 16:55:13 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:24.001 16:55:13 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:24.001 16:55:13 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:24.001 16:55:13 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:24.001 16:55:13 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:24.001 16:55:13 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:24.001 16:55:13 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:24.001 [2024-07-15 16:55:13.994186] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:06:24.001 [2024-07-15 16:55:13.994287] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61273 ] 00:06:24.001 [2024-07-15 16:55:14.128724] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.001 [2024-07-15 16:55:14.244063] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.260 16:55:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:24.260 16:55:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.260 16:55:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:24.260 16:55:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:24.260 16:55:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:24.260 16:55:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.260 16:55:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:24.260 16:55:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:24.260 16:55:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:24.260 16:55:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.261 16:55:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:24.261 16:55:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:24.261 16:55:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:24.261 16:55:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.261 16:55:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:24.261 16:55:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:24.261 16:55:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:24.261 16:55:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.261 16:55:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:24.261 16:55:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:24.261 16:55:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:06:24.261 16:55:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.261 16:55:14 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:24.261 16:55:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:24.261 16:55:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:24.261 16:55:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:24.261 16:55:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.261 16:55:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # 
IFS=: 00:06:24.261 16:55:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:24.261 16:55:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:24.261 16:55:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.261 16:55:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:24.261 16:55:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:24.261 16:55:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:24.261 16:55:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.261 16:55:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:24.261 16:55:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:24.261 16:55:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:24.261 16:55:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.261 16:55:14 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:24.261 16:55:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:24.261 16:55:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:24.261 16:55:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:24.261 16:55:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.261 16:55:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:24.261 16:55:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:24.261 16:55:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:24.261 16:55:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.261 16:55:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:24.261 16:55:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:24.261 16:55:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:24.261 16:55:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.261 16:55:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:24.261 16:55:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:24.261 16:55:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:24.261 16:55:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.261 16:55:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:24.261 16:55:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:24.261 16:55:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:24.261 16:55:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.261 16:55:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:24.261 16:55:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:24.261 16:55:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:24.261 16:55:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.261 16:55:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:24.261 16:55:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:24.261 16:55:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:24.261 16:55:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.261 16:55:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:24.261 16:55:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:25.197 16:55:15 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:25.197 16:55:15 
accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.197 16:55:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:25.197 16:55:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:25.197 16:55:15 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:25.197 16:55:15 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.197 16:55:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:25.197 16:55:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:25.197 16:55:15 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:25.197 16:55:15 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.197 16:55:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:25.197 16:55:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:25.197 16:55:15 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:25.197 16:55:15 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.197 16:55:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:25.197 16:55:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:25.197 16:55:15 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:25.197 16:55:15 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.197 16:55:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:25.197 16:55:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:25.197 16:55:15 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:25.197 16:55:15 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.197 16:55:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:25.197 16:55:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:25.197 16:55:15 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:25.197 16:55:15 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:25.197 16:55:15 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:25.197 00:06:25.197 real 0m1.488s 00:06:25.197 user 0m1.286s 00:06:25.197 sys 0m0.106s 00:06:25.197 16:55:15 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:25.197 ************************************ 00:06:25.197 END TEST accel_crc32c_C2 00:06:25.197 ************************************ 00:06:25.197 16:55:15 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:25.457 16:55:15 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:25.457 16:55:15 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:25.457 16:55:15 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:25.457 16:55:15 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:25.457 16:55:15 accel -- common/autotest_common.sh@10 -- # set +x 00:06:25.457 ************************************ 00:06:25.457 START TEST accel_copy 00:06:25.457 ************************************ 00:06:25.457 16:55:15 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:06:25.457 16:55:15 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:25.457 16:55:15 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:06:25.457 16:55:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.457 16:55:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:25.457 16:55:15 accel.accel_copy -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:25.457 16:55:15 accel.accel_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:25.457 16:55:15 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:25.457 16:55:15 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:25.457 16:55:15 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:25.457 16:55:15 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:25.457 16:55:15 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:25.457 16:55:15 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:25.457 16:55:15 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:25.457 16:55:15 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:06:25.457 [2024-07-15 16:55:15.529108] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:06:25.457 [2024-07-15 16:55:15.529193] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61307 ] 00:06:25.457 [2024-07-15 16:55:15.660162] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.716 [2024-07-15 16:55:15.759813] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.716 16:55:15 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:25.716 16:55:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:25.716 16:55:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.716 16:55:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:25.716 16:55:15 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:25.716 16:55:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:25.716 16:55:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.716 16:55:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:25.716 16:55:15 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:06:25.716 16:55:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:25.716 16:55:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.716 16:55:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:25.716 16:55:15 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:25.716 16:55:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:25.716 16:55:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.716 16:55:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:25.716 16:55:15 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:25.716 16:55:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:25.716 16:55:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.716 16:55:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:25.716 16:55:15 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:06:25.716 16:55:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:25.716 16:55:15 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:06:25.716 16:55:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.716 16:55:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:25.717 16:55:15 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:25.717 16:55:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:25.717 
16:55:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.717 16:55:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:25.717 16:55:15 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:25.717 16:55:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:25.717 16:55:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.717 16:55:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:25.717 16:55:15 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:06:25.717 16:55:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:25.717 16:55:15 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:25.717 16:55:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.717 16:55:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:25.717 16:55:15 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:25.717 16:55:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:25.717 16:55:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.717 16:55:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:25.717 16:55:15 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:25.717 16:55:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:25.717 16:55:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.717 16:55:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:25.717 16:55:15 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:06:25.717 16:55:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:25.717 16:55:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.717 16:55:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:25.717 16:55:15 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:25.717 16:55:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:25.717 16:55:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.717 16:55:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:25.717 16:55:15 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:06:25.717 16:55:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:25.717 16:55:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.717 16:55:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:25.717 16:55:15 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:25.717 16:55:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:25.717 16:55:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.717 16:55:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:25.717 16:55:15 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:25.717 16:55:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:25.717 16:55:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.717 16:55:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:27.092 16:55:16 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:27.092 16:55:16 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:27.092 16:55:16 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:27.092 16:55:16 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:27.092 16:55:16 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:27.092 16:55:16 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:27.092 16:55:16 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:27.092 16:55:16 accel.accel_copy -- accel/accel.sh@19 
-- # read -r var val 00:06:27.092 16:55:16 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:27.092 16:55:16 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:27.092 16:55:16 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:27.092 16:55:16 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:27.092 16:55:16 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:27.092 16:55:16 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:27.092 16:55:16 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:27.092 16:55:16 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:27.092 16:55:16 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:27.092 16:55:16 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:27.092 16:55:16 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:27.092 16:55:16 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:27.092 16:55:16 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:27.092 16:55:16 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:27.092 16:55:16 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:27.092 16:55:16 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:27.092 16:55:16 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:27.092 16:55:16 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:06:27.092 16:55:16 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:27.092 00:06:27.092 real 0m1.476s 00:06:27.092 user 0m1.271s 00:06:27.092 sys 0m0.112s 00:06:27.092 16:55:16 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:27.092 16:55:16 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:06:27.092 ************************************ 00:06:27.092 END TEST accel_copy 00:06:27.092 ************************************ 00:06:27.092 16:55:17 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:27.092 16:55:17 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:27.092 16:55:17 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:27.092 16:55:17 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:27.092 16:55:17 accel -- common/autotest_common.sh@10 -- # set +x 00:06:27.092 ************************************ 00:06:27.092 START TEST accel_fill 00:06:27.092 ************************************ 00:06:27.092 16:55:17 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:27.092 16:55:17 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:06:27.092 16:55:17 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:06:27.092 16:55:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:27.092 16:55:17 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:27.092 16:55:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:27.092 16:55:17 accel.accel_fill -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:27.092 16:55:17 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:06:27.092 16:55:17 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:27.093 16:55:17 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:27.093 16:55:17 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:27.093 16:55:17 accel.accel_fill -- 
accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:27.093 16:55:17 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:27.093 16:55:17 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:06:27.093 16:55:17 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:06:27.093 [2024-07-15 16:55:17.055290] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:06:27.093 [2024-07-15 16:55:17.055418] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61342 ] 00:06:27.093 [2024-07-15 16:55:17.195773] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.093 [2024-07-15 16:55:17.320344] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.093 16:55:17 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:27.093 16:55:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:27.093 16:55:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:27.093 16:55:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:27.093 16:55:17 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:27.093 16:55:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:27.093 16:55:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:27.093 16:55:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:27.093 16:55:17 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:06:27.093 16:55:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:27.093 16:55:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:27.093 16:55:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:27.093 16:55:17 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:27.093 16:55:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:27.093 16:55:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:27.093 16:55:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:27.093 16:55:17 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:27.093 16:55:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:27.093 16:55:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:27.093 16:55:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:27.093 16:55:17 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:06:27.093 16:55:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:27.093 16:55:17 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:06:27.093 16:55:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:27.093 16:55:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:27.093 16:55:17 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:06:27.093 16:55:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:27.093 16:55:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:27.093 16:55:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:27.093 16:55:17 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:27.093 16:55:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:27.093 16:55:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:27.093 16:55:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:27.093 16:55:17 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:27.093 16:55:17 accel.accel_fill -- 
accel/accel.sh@21 -- # case "$var" in 00:06:27.093 16:55:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:27.093 16:55:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:27.093 16:55:17 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:06:27.352 16:55:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:27.352 16:55:17 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:06:27.352 16:55:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:27.352 16:55:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:27.352 16:55:17 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:27.352 16:55:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:27.352 16:55:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:27.352 16:55:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:27.352 16:55:17 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:27.352 16:55:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:27.352 16:55:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:27.352 16:55:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:27.352 16:55:17 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:06:27.352 16:55:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:27.352 16:55:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:27.352 16:55:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:27.352 16:55:17 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:06:27.352 16:55:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:27.352 16:55:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:27.352 16:55:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:27.352 16:55:17 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:06:27.352 16:55:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:27.352 16:55:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:27.352 16:55:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:27.352 16:55:17 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:27.352 16:55:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:27.352 16:55:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:27.352 16:55:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:27.352 16:55:17 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:27.352 16:55:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:27.352 16:55:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:27.352 16:55:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:28.288 16:55:18 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:28.288 16:55:18 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:28.288 16:55:18 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:28.288 16:55:18 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:28.288 16:55:18 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:28.288 16:55:18 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:28.288 16:55:18 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:28.288 16:55:18 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:28.288 16:55:18 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:28.288 16:55:18 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:28.288 16:55:18 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 
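In the fill trace above, the command line's '-f 128' surfaces as val=0x80 (128 decimal is 0x80 hex), and the two val=64 entries appear to correspond to the '-q 64 -a 64' pair, compared with the 32/32 defaults seen in the crc32c runs. A one-liner to confirm the byte mapping:

  printf 'fill byte: 0x%02x\n' 128    # prints: fill byte: 0x80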
00:06:28.288 16:55:18 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:28.288 16:55:18 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:28.288 16:55:18 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:28.288 16:55:18 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:28.288 16:55:18 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:28.288 16:55:18 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:28.288 16:55:18 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:28.288 16:55:18 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:28.288 16:55:18 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:28.288 16:55:18 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:28.288 16:55:18 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:28.288 16:55:18 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:28.288 16:55:18 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:28.288 16:55:18 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:28.288 16:55:18 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:06:28.288 16:55:18 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:28.288 00:06:28.288 real 0m1.512s 00:06:28.288 user 0m1.306s 00:06:28.288 sys 0m0.116s 00:06:28.288 16:55:18 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:28.288 ************************************ 00:06:28.288 END TEST accel_fill 00:06:28.288 ************************************ 00:06:28.288 16:55:18 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:06:28.288 16:55:18 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:28.288 16:55:18 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:28.547 16:55:18 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:28.548 16:55:18 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:28.548 16:55:18 accel -- common/autotest_common.sh@10 -- # set +x 00:06:28.548 ************************************ 00:06:28.548 START TEST accel_copy_crc32c 00:06:28.548 ************************************ 00:06:28.548 16:55:18 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:06:28.548 16:55:18 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:28.548 16:55:18 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:28.548 16:55:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:28.548 16:55:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:28.548 16:55:18 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:28.548 16:55:18 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:28.548 16:55:18 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:28.548 16:55:18 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:28.548 16:55:18 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:28.548 16:55:18 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:28.548 16:55:18 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:28.548 16:55:18 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:28.548 16:55:18 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # 
local IFS=, 00:06:28.548 16:55:18 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:28.548 [2024-07-15 16:55:18.618277] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:06:28.548 [2024-07-15 16:55:18.618382] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61376 ] 00:06:28.548 [2024-07-15 16:55:18.756282] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.807 [2024-07-15 16:55:18.866979] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.807 16:55:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:28.807 16:55:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:28.807 16:55:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:28.807 16:55:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:28.807 16:55:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:28.807 16:55:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:28.807 16:55:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:28.807 16:55:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:28.807 16:55:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:28.807 16:55:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:28.807 16:55:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:28.807 16:55:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:28.807 16:55:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:28.807 16:55:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:28.807 16:55:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:28.807 16:55:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:28.807 16:55:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:28.807 16:55:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:28.807 16:55:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:28.807 16:55:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:28.807 16:55:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:28.807 16:55:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:28.807 16:55:18 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:28.807 16:55:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:28.807 16:55:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:28.807 16:55:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:06:28.807 16:55:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:28.807 16:55:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:28.807 16:55:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:28.807 16:55:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:28.807 16:55:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:28.807 16:55:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:28.807 16:55:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:28.807 16:55:18 accel.accel_copy_crc32c -- 
accel/accel.sh@20 -- # val='4096 bytes' 00:06:28.807 16:55:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:28.807 16:55:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:28.807 16:55:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:28.807 16:55:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:28.807 16:55:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:28.807 16:55:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:28.807 16:55:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:28.807 16:55:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:06:28.807 16:55:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:28.807 16:55:18 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:28.807 16:55:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:28.807 16:55:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:28.807 16:55:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:28.807 16:55:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:28.807 16:55:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:28.807 16:55:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:28.807 16:55:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:28.807 16:55:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:28.807 16:55:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:28.807 16:55:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:28.807 16:55:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:06:28.807 16:55:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:28.807 16:55:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:28.807 16:55:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:28.807 16:55:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:28.807 16:55:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:28.807 16:55:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:28.807 16:55:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:28.807 16:55:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:28.807 16:55:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:28.807 16:55:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:28.807 16:55:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:28.807 16:55:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:28.807 16:55:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:28.807 16:55:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:28.807 16:55:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:28.807 16:55:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:28.807 16:55:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:28.807 16:55:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:28.807 16:55:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:30.185 16:55:20 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:30.185 16:55:20 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 
00:06:30.185 16:55:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:30.185 16:55:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:30.185 16:55:20 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:30.185 16:55:20 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:30.185 16:55:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:30.185 16:55:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:30.185 16:55:20 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:30.185 16:55:20 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:30.185 16:55:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:30.185 16:55:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:30.185 16:55:20 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:30.185 16:55:20 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:30.185 16:55:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:30.185 16:55:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:30.185 16:55:20 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:30.185 16:55:20 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:30.185 16:55:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:30.185 16:55:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:30.185 16:55:20 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:30.185 16:55:20 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:30.185 16:55:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:30.185 16:55:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:30.185 16:55:20 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:30.185 16:55:20 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:30.185 16:55:20 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:30.185 00:06:30.185 real 0m1.491s 00:06:30.185 user 0m1.278s 00:06:30.185 sys 0m0.118s 00:06:30.185 16:55:20 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:30.185 16:55:20 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:30.185 ************************************ 00:06:30.185 END TEST accel_copy_crc32c 00:06:30.185 ************************************ 00:06:30.185 16:55:20 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:30.185 16:55:20 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:30.185 16:55:20 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:30.185 16:55:20 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:30.185 16:55:20 accel -- common/autotest_common.sh@10 -- # set +x 00:06:30.185 ************************************ 00:06:30.185 START TEST accel_copy_crc32c_C2 00:06:30.185 ************************************ 00:06:30.185 16:55:20 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:30.185 16:55:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:30.185 16:55:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:30.185 16:55:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:30.185 16:55:20 accel.accel_copy_crc32c_C2 
-- accel/accel.sh@19 -- # read -r var val 00:06:30.185 16:55:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:30.185 16:55:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:30.185 16:55:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:30.185 16:55:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:30.185 16:55:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:30.185 16:55:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:30.185 16:55:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:30.185 16:55:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:30.185 16:55:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:30.185 16:55:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:30.185 [2024-07-15 16:55:20.158217] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:06:30.185 [2024-07-15 16:55:20.158325] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61411 ] 00:06:30.185 [2024-07-15 16:55:20.292305] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.185 [2024-07-15 16:55:20.404346] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.185 16:55:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:30.185 16:55:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.185 16:55:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:30.185 16:55:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:30.185 16:55:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:30.185 16:55:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.185 16:55:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:30.185 16:55:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:30.185 16:55:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:30.185 16:55:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.185 16:55:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:30.185 16:55:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:30.185 16:55:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:30.185 16:55:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.185 16:55:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:30.185 16:55:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:30.185 16:55:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:30.185 16:55:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.185 16:55:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:30.185 16:55:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:30.185 16:55:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:30.185 16:55:20 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:06:30.185 16:55:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:30.185 16:55:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:30.185 16:55:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:30.185 16:55:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:30.185 16:55:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.186 16:55:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:30.186 16:55:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:30.186 16:55:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:30.186 16:55:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.186 16:55:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:30.186 16:55:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:30.186 16:55:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:06:30.186 16:55:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.186 16:55:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:30.186 16:55:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:30.186 16:55:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:30.186 16:55:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.186 16:55:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:30.186 16:55:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:30.186 16:55:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:30.186 16:55:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.186 16:55:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:30.186 16:55:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:30.186 16:55:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:30.186 16:55:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:30.186 16:55:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.186 16:55:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:30.186 16:55:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:30.186 16:55:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:30.186 16:55:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.186 16:55:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:30.186 16:55:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:30.186 16:55:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:30.186 16:55:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.186 16:55:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:30.186 16:55:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:30.186 16:55:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:30.186 16:55:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.186 16:55:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:30.186 16:55:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:30.186 16:55:20 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:30.186 16:55:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.186 16:55:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:30.186 16:55:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:30.186 16:55:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:30.186 16:55:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.186 16:55:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:30.186 16:55:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:30.186 16:55:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:30.186 16:55:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.186 16:55:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:30.186 16:55:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:31.561 16:55:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:31.561 16:55:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.561 16:55:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:31.561 16:55:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:31.561 16:55:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:31.561 16:55:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.561 16:55:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:31.561 16:55:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:31.561 16:55:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:31.561 16:55:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.561 16:55:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:31.561 16:55:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:31.561 16:55:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:31.561 16:55:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.561 16:55:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:31.561 16:55:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:31.561 16:55:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:31.561 16:55:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.561 16:55:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:31.561 16:55:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:31.561 16:55:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:31.561 16:55:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.561 16:55:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:31.561 16:55:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:31.561 16:55:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:31.561 16:55:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:31.561 16:55:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:31.561 00:06:31.561 real 0m1.491s 00:06:31.561 user 0m1.287s 00:06:31.561 sys 0m0.110s 00:06:31.561 16:55:21 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 
00:06:31.561 16:55:21 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:31.561 ************************************ 00:06:31.561 END TEST accel_copy_crc32c_C2 00:06:31.561 ************************************ 00:06:31.561 16:55:21 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:31.561 16:55:21 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:31.561 16:55:21 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:31.561 16:55:21 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:31.561 16:55:21 accel -- common/autotest_common.sh@10 -- # set +x 00:06:31.561 ************************************ 00:06:31.561 START TEST accel_dualcast 00:06:31.561 ************************************ 00:06:31.561 16:55:21 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:06:31.561 16:55:21 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:06:31.561 16:55:21 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:06:31.561 16:55:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:31.561 16:55:21 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:31.561 16:55:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:31.561 16:55:21 accel.accel_dualcast -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:31.561 16:55:21 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:06:31.561 16:55:21 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:31.561 16:55:21 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:31.561 16:55:21 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:31.561 16:55:21 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:31.561 16:55:21 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:31.561 16:55:21 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:06:31.561 16:55:21 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:06:31.561 [2024-07-15 16:55:21.691313] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
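The dualcast case that begins here follows the same shape as the fill and copy_crc32c cases above: the wrapper builds an accel_perf command line with the workload name and a one-second run time, then reads the tool's reported configuration back over a descriptor. As a minimal sketch only (assuming the SPDK tree built at the path visible in the trace, and leaving out the -c /dev/fd/62 config plumbing the harness supplies), the same workload can be run by hand:

  # 1-second software dualcast run with the default 4096-byte buffers,
  # mirroring the flags the harness passes at accel.sh@12 above
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dualcast -y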
00:06:31.562 [2024-07-15 16:55:21.691419] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61445 ] 00:06:31.562 [2024-07-15 16:55:21.823992] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.820 [2024-07-15 16:55:21.937459] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.820 16:55:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:31.820 16:55:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:31.820 16:55:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:31.820 16:55:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:31.820 16:55:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:31.820 16:55:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:31.820 16:55:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:31.820 16:55:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:31.820 16:55:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:06:31.820 16:55:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:31.820 16:55:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:31.820 16:55:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:31.820 16:55:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:31.820 16:55:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:31.820 16:55:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:31.820 16:55:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:31.820 16:55:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:31.820 16:55:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:31.820 16:55:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:31.820 16:55:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:31.820 16:55:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:06:31.820 16:55:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:31.820 16:55:21 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:06:31.820 16:55:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:31.820 16:55:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:31.820 16:55:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:31.820 16:55:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:31.820 16:55:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:31.820 16:55:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:31.820 16:55:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:31.820 16:55:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:31.820 16:55:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:31.820 16:55:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:31.820 16:55:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:06:31.820 16:55:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:31.820 16:55:21 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:06:31.820 16:55:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:31.820 16:55:21 accel.accel_dualcast 
-- accel/accel.sh@19 -- # read -r var val 00:06:31.820 16:55:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:31.820 16:55:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:31.820 16:55:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:31.820 16:55:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:31.820 16:55:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:31.820 16:55:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:31.820 16:55:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:31.820 16:55:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:31.820 16:55:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:06:31.820 16:55:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:31.820 16:55:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:31.820 16:55:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:31.820 16:55:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:06:31.820 16:55:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:31.820 16:55:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:31.820 16:55:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:31.820 16:55:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:06:31.820 16:55:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:31.820 16:55:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:31.820 16:55:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:31.820 16:55:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:31.820 16:55:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:31.820 16:55:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:31.820 16:55:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:31.820 16:55:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:31.820 16:55:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:31.820 16:55:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:31.820 16:55:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:33.193 16:55:23 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:33.193 16:55:23 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:33.193 16:55:23 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:33.193 16:55:23 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:33.193 16:55:23 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:33.193 16:55:23 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:33.193 16:55:23 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:33.193 16:55:23 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:33.193 16:55:23 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:33.193 16:55:23 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:33.193 16:55:23 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:33.193 16:55:23 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:33.193 16:55:23 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:33.193 16:55:23 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:33.193 16:55:23 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:33.193 16:55:23 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var 
val 00:06:33.193 16:55:23 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:33.193 16:55:23 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:33.193 16:55:23 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:33.193 16:55:23 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:33.193 16:55:23 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:33.193 16:55:23 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:33.193 16:55:23 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:33.193 16:55:23 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:33.193 16:55:23 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:33.193 16:55:23 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:06:33.193 16:55:23 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:33.193 00:06:33.193 real 0m1.484s 00:06:33.193 user 0m0.014s 00:06:33.193 sys 0m0.000s 00:06:33.193 16:55:23 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:33.193 ************************************ 00:06:33.193 END TEST accel_dualcast 00:06:33.193 ************************************ 00:06:33.193 16:55:23 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:06:33.193 16:55:23 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:33.193 16:55:23 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:33.193 16:55:23 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:33.193 16:55:23 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:33.193 16:55:23 accel -- common/autotest_common.sh@10 -- # set +x 00:06:33.193 ************************************ 00:06:33.193 START TEST accel_compare 00:06:33.193 ************************************ 00:06:33.193 16:55:23 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:06:33.193 16:55:23 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:06:33.193 16:55:23 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:06:33.193 16:55:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:33.193 16:55:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:33.193 16:55:23 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:33.193 16:55:23 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:06:33.193 16:55:23 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:33.193 16:55:23 accel.accel_compare -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:33.193 16:55:23 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:33.193 16:55:23 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:33.193 16:55:23 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:33.193 16:55:23 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:33.193 16:55:23 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:06:33.193 16:55:23 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:06:33.193 [2024-07-15 16:55:23.230887] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
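The long run of 'val=' reads in each case is accel.sh consuming accel_perf's reported settings line by line (IFS=: splits each line into var and val) while recording accel_module and accel_opc along the way. The pass condition checked just before every END TEST banner is the trio of [[ ... ]] tests that appear expanded in the trace; restated with the variable names accel.sh uses, the check looks roughly like this (a cleaned-up paraphrase, not a verbatim excerpt):

  [[ -n "$accel_module" ]]            # some engine reported back
  [[ -n "$accel_opc" ]]               # the expected opcode (compare, xor, ...) was recorded
  [[ "$accel_module" == software ]]   # and it ran on the software engine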
00:06:33.193 [2024-07-15 16:55:23.230983] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61480 ] 00:06:33.193 [2024-07-15 16:55:23.369735] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.193 [2024-07-15 16:55:23.483756] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.452 16:55:23 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:33.452 16:55:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:33.452 16:55:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:33.452 16:55:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:33.452 16:55:23 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:33.452 16:55:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:33.452 16:55:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:33.452 16:55:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:33.452 16:55:23 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:06:33.452 16:55:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:33.452 16:55:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:33.452 16:55:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:33.452 16:55:23 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:33.452 16:55:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:33.452 16:55:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:33.452 16:55:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:33.452 16:55:23 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:33.452 16:55:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:33.452 16:55:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:33.452 16:55:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:33.452 16:55:23 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:06:33.452 16:55:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:33.452 16:55:23 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:06:33.452 16:55:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:33.452 16:55:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:33.452 16:55:23 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:33.452 16:55:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:33.452 16:55:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:33.452 16:55:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:33.452 16:55:23 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:33.452 16:55:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:33.452 16:55:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:33.452 16:55:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:33.452 16:55:23 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:06:33.452 16:55:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:33.452 16:55:23 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:06:33.452 16:55:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:33.452 16:55:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var 
val 00:06:33.452 16:55:23 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:33.452 16:55:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:33.452 16:55:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:33.452 16:55:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:33.452 16:55:23 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:33.452 16:55:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:33.452 16:55:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:33.452 16:55:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:33.452 16:55:23 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:06:33.452 16:55:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:33.452 16:55:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:33.452 16:55:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:33.452 16:55:23 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:06:33.452 16:55:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:33.452 16:55:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:33.452 16:55:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:33.452 16:55:23 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:06:33.452 16:55:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:33.452 16:55:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:33.452 16:55:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:33.452 16:55:23 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:33.452 16:55:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:33.452 16:55:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:33.452 16:55:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:33.452 16:55:23 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:33.452 16:55:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:33.452 16:55:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:33.452 16:55:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:34.826 16:55:24 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:34.826 16:55:24 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:34.826 16:55:24 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:34.826 16:55:24 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:34.826 16:55:24 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:34.826 16:55:24 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:34.826 16:55:24 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:34.826 16:55:24 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:34.826 16:55:24 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:34.826 16:55:24 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:34.826 16:55:24 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:34.826 16:55:24 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:34.826 16:55:24 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:34.826 16:55:24 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:34.826 16:55:24 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:34.826 16:55:24 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:34.826 16:55:24 accel.accel_compare -- accel/accel.sh@20 -- # val= 
00:06:34.826 16:55:24 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:34.826 16:55:24 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:34.826 16:55:24 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:34.826 16:55:24 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:34.826 16:55:24 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:34.826 16:55:24 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:34.826 16:55:24 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:34.826 16:55:24 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:34.826 16:55:24 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:06:34.826 16:55:24 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:34.826 00:06:34.826 real 0m1.497s 00:06:34.826 user 0m1.286s 00:06:34.826 sys 0m0.115s 00:06:34.826 16:55:24 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:34.826 16:55:24 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:06:34.826 ************************************ 00:06:34.826 END TEST accel_compare 00:06:34.826 ************************************ 00:06:34.826 16:55:24 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:34.826 16:55:24 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:34.826 16:55:24 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:34.826 16:55:24 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:34.826 16:55:24 accel -- common/autotest_common.sh@10 -- # set +x 00:06:34.826 ************************************ 00:06:34.826 START TEST accel_xor 00:06:34.826 ************************************ 00:06:34.826 16:55:24 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:06:34.826 16:55:24 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:34.826 16:55:24 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:34.826 16:55:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:34.826 16:55:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:34.826 16:55:24 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:34.826 16:55:24 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:34.826 16:55:24 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:34.826 16:55:24 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:34.826 16:55:24 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:34.826 16:55:24 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:34.826 16:55:24 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:34.826 16:55:24 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:34.826 16:55:24 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:34.826 16:55:24 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:34.826 [2024-07-15 16:55:24.781863] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
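Two xor passes are queued back to back here: this one with the default number of source buffers, and a second one below driven with -x 3. In the trace the only visible difference is the source-count read (val=2 for this run, val=3 for the next). Matching command lines, under the same build-path assumption as before:

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y        # default two sources
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 3   # three sources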
00:06:34.826 [2024-07-15 16:55:24.781984] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61514 ] 00:06:34.826 [2024-07-15 16:55:24.924733] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.826 [2024-07-15 16:55:25.038544] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.826 16:55:25 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:34.826 16:55:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:34.826 16:55:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:34.826 16:55:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:34.826 16:55:25 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:34.826 16:55:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:34.826 16:55:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:34.826 16:55:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:34.826 16:55:25 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:34.826 16:55:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:34.826 16:55:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:34.826 16:55:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:34.827 16:55:25 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:34.827 16:55:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:34.827 16:55:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:34.827 16:55:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:34.827 16:55:25 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:34.827 16:55:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:34.827 16:55:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:34.827 16:55:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:34.827 16:55:25 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:34.827 16:55:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:34.827 16:55:25 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:34.827 16:55:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:34.827 16:55:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:34.827 16:55:25 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:06:34.827 16:55:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:34.827 16:55:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:34.827 16:55:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:34.827 16:55:25 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:34.827 16:55:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:34.827 16:55:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:34.827 16:55:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:34.827 16:55:25 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:34.827 16:55:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:34.827 16:55:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:34.827 16:55:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:34.827 16:55:25 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:34.827 16:55:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:34.827 16:55:25 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
00:06:34.827 16:55:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:34.827 16:55:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:34.827 16:55:25 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:34.827 16:55:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:34.827 16:55:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:34.827 16:55:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:34.827 16:55:25 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:34.827 16:55:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:34.827 16:55:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:34.827 16:55:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:34.827 16:55:25 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:34.827 16:55:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:34.827 16:55:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:34.827 16:55:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:34.827 16:55:25 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:34.827 16:55:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:34.827 16:55:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:34.827 16:55:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:34.827 16:55:25 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:34.827 16:55:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:34.827 16:55:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:34.827 16:55:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:34.827 16:55:25 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:34.827 16:55:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:34.827 16:55:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:34.827 16:55:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:34.827 16:55:25 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:34.827 16:55:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:34.827 16:55:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:34.827 16:55:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:36.200 16:55:26 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:36.200 16:55:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:36.200 16:55:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:36.200 16:55:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:36.200 16:55:26 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:36.200 16:55:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:36.200 16:55:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:36.200 16:55:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:36.200 16:55:26 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:36.200 16:55:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:36.200 16:55:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:36.200 16:55:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:36.200 16:55:26 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:36.200 16:55:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:36.200 16:55:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:36.200 16:55:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:36.200 16:55:26 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:36.200 16:55:26 accel.accel_xor 
-- accel/accel.sh@21 -- # case "$var" in 00:06:36.200 16:55:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:36.200 16:55:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:36.200 16:55:26 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:36.200 16:55:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:36.200 16:55:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:36.200 16:55:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:36.200 16:55:26 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:36.200 16:55:26 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:36.200 16:55:26 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:36.200 00:06:36.200 real 0m1.502s 00:06:36.200 user 0m0.012s 00:06:36.200 sys 0m0.003s 00:06:36.200 16:55:26 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:36.200 ************************************ 00:06:36.200 END TEST accel_xor 00:06:36.200 ************************************ 00:06:36.200 16:55:26 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:36.200 16:55:26 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:36.200 16:55:26 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:36.200 16:55:26 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:36.200 16:55:26 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:36.200 16:55:26 accel -- common/autotest_common.sh@10 -- # set +x 00:06:36.200 ************************************ 00:06:36.200 START TEST accel_xor 00:06:36.200 ************************************ 00:06:36.200 16:55:26 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:06:36.200 16:55:26 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:36.200 16:55:26 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:36.200 16:55:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:36.200 16:55:26 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:36.200 16:55:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:36.200 16:55:26 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:36.200 16:55:26 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:36.201 16:55:26 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:36.201 16:55:26 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:36.201 16:55:26 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:36.201 16:55:26 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:36.201 16:55:26 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:36.201 16:55:26 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:36.201 16:55:26 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:36.201 [2024-07-15 16:55:26.325198] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
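Each sub-test closes with a real/user/sys summary just before its END TEST banner (for instance real 0m1.502s for the two-source xor above), which is the easiest number to compare across opcodes. To pull those figures out of a saved copy of this console output, a simple grep is enough; the file name below is only a placeholder:

  grep -E 'real[[:space:]]+[0-9]+m[0-9.]+s' console.log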
00:06:36.201 [2024-07-15 16:55:26.325320] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61549 ] 00:06:36.201 [2024-07-15 16:55:26.469006] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.458 [2024-07-15 16:55:26.581668] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.459 16:55:26 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:36.459 16:55:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:36.459 16:55:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:36.459 16:55:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:36.459 16:55:26 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:36.459 16:55:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:36.459 16:55:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:36.459 16:55:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:36.459 16:55:26 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:36.459 16:55:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:36.459 16:55:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:36.459 16:55:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:36.459 16:55:26 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:36.459 16:55:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:36.459 16:55:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:36.459 16:55:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:36.459 16:55:26 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:36.459 16:55:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:36.459 16:55:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:36.459 16:55:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:36.459 16:55:26 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:36.459 16:55:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:36.459 16:55:26 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:36.459 16:55:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:36.459 16:55:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:36.459 16:55:26 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:06:36.459 16:55:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:36.459 16:55:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:36.459 16:55:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:36.459 16:55:26 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:36.459 16:55:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:36.459 16:55:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:36.459 16:55:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:36.459 16:55:26 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:36.459 16:55:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:36.459 16:55:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:36.459 16:55:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:36.459 16:55:26 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:36.459 16:55:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:36.459 16:55:26 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
00:06:36.459 16:55:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:36.459 16:55:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:36.459 16:55:26 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:36.459 16:55:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:36.459 16:55:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:36.459 16:55:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:36.459 16:55:26 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:36.459 16:55:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:36.459 16:55:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:36.459 16:55:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:36.459 16:55:26 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:36.459 16:55:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:36.459 16:55:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:36.459 16:55:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:36.459 16:55:26 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:36.459 16:55:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:36.459 16:55:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:36.459 16:55:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:36.459 16:55:26 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:36.459 16:55:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:36.459 16:55:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:36.459 16:55:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:36.459 16:55:26 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:36.459 16:55:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:36.459 16:55:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:36.459 16:55:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:36.459 16:55:26 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:36.459 16:55:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:36.459 16:55:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:36.459 16:55:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:37.833 16:55:27 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:37.833 16:55:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:37.833 16:55:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:37.833 16:55:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:37.833 16:55:27 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:37.833 16:55:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:37.833 16:55:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:37.833 16:55:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:37.833 16:55:27 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:37.833 16:55:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:37.833 16:55:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:37.833 16:55:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:37.833 16:55:27 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:37.833 16:55:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:37.833 16:55:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:37.833 16:55:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:37.833 16:55:27 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:37.833 16:55:27 accel.accel_xor 
-- accel/accel.sh@21 -- # case "$var" in 00:06:37.833 16:55:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:37.833 16:55:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:37.833 16:55:27 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:37.833 16:55:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:37.833 16:55:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:37.833 16:55:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:37.833 16:55:27 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:37.833 16:55:27 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:37.833 16:55:27 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:37.833 00:06:37.833 real 0m1.495s 00:06:37.833 user 0m0.011s 00:06:37.833 sys 0m0.003s 00:06:37.833 ************************************ 00:06:37.833 END TEST accel_xor 00:06:37.833 ************************************ 00:06:37.833 16:55:27 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:37.833 16:55:27 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:37.833 16:55:27 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:37.833 16:55:27 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:37.833 16:55:27 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:37.833 16:55:27 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:37.833 16:55:27 accel -- common/autotest_common.sh@10 -- # set +x 00:06:37.833 ************************************ 00:06:37.833 START TEST accel_dif_verify 00:06:37.833 ************************************ 00:06:37.833 16:55:27 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:06:37.833 16:55:27 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:06:37.833 16:55:27 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:06:37.833 16:55:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:37.833 16:55:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:37.833 16:55:27 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:37.833 16:55:27 accel.accel_dif_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:37.833 16:55:27 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:37.833 16:55:27 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:37.833 16:55:27 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:37.833 16:55:27 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:37.833 16:55:27 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:37.833 16:55:27 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:37.833 16:55:27 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:37.833 16:55:27 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:06:37.833 [2024-07-15 16:55:27.870071] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
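The accel_xor case above finishes (END TEST accel_xor) and the harness moves on to accel_dif_verify, launching the accel_perf example with the workload and one-second duration seen in the trace. A minimal sketch of running the same DIF-verify workload by hand, using the binary path from the trace; dropping the harness's -c /dev/fd/62 JSON-config plumbing is an assumption of this sketch, not something the log shows:

  # Run the software dif_verify workload for 1 second, as accel_test does above.
  # Assumption: no -c JSON config is required for a plain software-module run.
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_verify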
00:06:37.833 [2024-07-15 16:55:27.870178] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61582 ] 00:06:37.833 [2024-07-15 16:55:28.001436] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.833 [2024-07-15 16:55:28.113334] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.091 16:55:28 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:38.091 16:55:28 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:38.091 16:55:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:38.091 16:55:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:38.091 16:55:28 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:38.091 16:55:28 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:38.091 16:55:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:38.091 16:55:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:38.091 16:55:28 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:06:38.091 16:55:28 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:38.091 16:55:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:38.091 16:55:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:38.091 16:55:28 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:38.091 16:55:28 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:38.091 16:55:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:38.091 16:55:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:38.091 16:55:28 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:38.091 16:55:28 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:38.091 16:55:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:38.091 16:55:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:38.091 16:55:28 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:06:38.091 16:55:28 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:38.091 16:55:28 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:06:38.091 16:55:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:38.091 16:55:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:38.091 16:55:28 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:38.091 16:55:28 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:38.091 16:55:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:38.091 16:55:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:38.091 16:55:28 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:38.091 16:55:28 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:38.091 16:55:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:38.091 16:55:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:38.091 16:55:28 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:06:38.091 16:55:28 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:38.091 16:55:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:38.091 16:55:28 
accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:38.091 16:55:28 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:06:38.091 16:55:28 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:38.091 16:55:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:38.091 16:55:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:38.091 16:55:28 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:38.091 16:55:28 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:38.091 16:55:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:38.091 16:55:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:38.091 16:55:28 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:06:38.091 16:55:28 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:38.091 16:55:28 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:06:38.091 16:55:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:38.091 16:55:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:38.091 16:55:28 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:38.091 16:55:28 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:38.091 16:55:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:38.091 16:55:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:38.091 16:55:28 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:38.091 16:55:28 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:38.091 16:55:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:38.091 16:55:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:38.091 16:55:28 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:06:38.091 16:55:28 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:38.091 16:55:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:38.091 16:55:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:38.091 16:55:28 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:06:38.091 16:55:28 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:38.091 16:55:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:38.091 16:55:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:38.091 16:55:28 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:06:38.091 16:55:28 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:38.091 16:55:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:38.091 16:55:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:38.091 16:55:28 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:38.091 16:55:28 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:38.091 16:55:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:38.091 16:55:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:38.091 16:55:28 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:38.091 16:55:28 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:38.091 16:55:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:38.091 16:55:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:39.463 16:55:29 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:39.463 16:55:29 
accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:39.463 16:55:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:39.463 16:55:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:39.463 16:55:29 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:39.463 16:55:29 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:39.463 16:55:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:39.463 16:55:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:39.463 16:55:29 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:39.463 16:55:29 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:39.463 16:55:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:39.463 16:55:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:39.463 16:55:29 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:39.463 16:55:29 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:39.463 16:55:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:39.463 16:55:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:39.463 16:55:29 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:39.463 16:55:29 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:39.463 16:55:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:39.463 16:55:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:39.463 16:55:29 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:39.463 16:55:29 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:39.463 16:55:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:39.463 16:55:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:39.463 16:55:29 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:39.463 16:55:29 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:06:39.463 16:55:29 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:39.463 00:06:39.463 real 0m1.492s 00:06:39.463 user 0m1.296s 00:06:39.463 sys 0m0.104s 00:06:39.464 16:55:29 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:39.464 16:55:29 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:06:39.464 ************************************ 00:06:39.464 END TEST accel_dif_verify 00:06:39.464 ************************************ 00:06:39.464 16:55:29 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:39.464 16:55:29 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:39.464 16:55:29 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:39.464 16:55:29 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:39.464 16:55:29 accel -- common/autotest_common.sh@10 -- # set +x 00:06:39.464 ************************************ 00:06:39.464 START TEST accel_dif_generate 00:06:39.464 ************************************ 00:06:39.464 16:55:29 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:06:39.464 16:55:29 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:06:39.464 16:55:29 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:06:39.464 16:55:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:39.464 16:55:29 
accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:39.464 16:55:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:39.464 16:55:29 accel.accel_dif_generate -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:39.464 16:55:29 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:06:39.464 16:55:29 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:39.464 16:55:29 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:39.464 16:55:29 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:39.464 16:55:29 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:39.464 16:55:29 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:39.464 16:55:29 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:06:39.464 16:55:29 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:06:39.464 [2024-07-15 16:55:29.402112] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:06:39.464 [2024-07-15 16:55:29.402809] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61618 ] 00:06:39.464 [2024-07-15 16:55:29.540847] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.464 [2024-07-15 16:55:29.668670] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.464 16:55:29 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:39.464 16:55:29 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:39.464 16:55:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:39.464 16:55:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:39.464 16:55:29 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:39.464 16:55:29 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:39.464 16:55:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:39.464 16:55:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:39.464 16:55:29 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:06:39.464 16:55:29 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:39.464 16:55:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:39.464 16:55:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:39.464 16:55:29 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:39.464 16:55:29 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:39.464 16:55:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:39.464 16:55:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:39.464 16:55:29 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:39.464 16:55:29 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:39.464 16:55:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:39.464 16:55:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:39.464 16:55:29 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:06:39.464 16:55:29 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:39.464 16:55:29 
accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:06:39.464 16:55:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:39.464 16:55:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:39.464 16:55:29 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:39.464 16:55:29 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:39.464 16:55:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:39.464 16:55:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:39.464 16:55:29 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:39.464 16:55:29 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:39.464 16:55:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:39.464 16:55:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:39.464 16:55:29 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:06:39.464 16:55:29 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:39.464 16:55:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:39.464 16:55:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:39.464 16:55:29 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:06:39.464 16:55:29 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:39.464 16:55:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:39.464 16:55:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:39.464 16:55:29 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:39.464 16:55:29 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:39.464 16:55:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:39.464 16:55:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:39.464 16:55:29 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:06:39.464 16:55:29 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:39.464 16:55:29 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:06:39.464 16:55:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:39.464 16:55:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:39.464 16:55:29 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:39.464 16:55:29 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:39.464 16:55:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:39.464 16:55:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:39.464 16:55:29 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:39.464 16:55:29 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:39.464 16:55:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:39.464 16:55:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:39.464 16:55:29 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:06:39.464 16:55:29 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:39.464 16:55:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:39.464 16:55:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:39.464 16:55:29 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:06:39.464 16:55:29 accel.accel_dif_generate -- 
accel/accel.sh@21 -- # case "$var" in 00:06:39.464 16:55:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:39.464 16:55:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:39.464 16:55:29 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:06:39.464 16:55:29 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:39.464 16:55:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:39.464 16:55:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:39.464 16:55:29 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:39.464 16:55:29 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:39.464 16:55:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:39.464 16:55:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:39.464 16:55:29 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:39.464 16:55:29 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:39.464 16:55:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:39.464 16:55:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:40.840 16:55:30 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:40.840 16:55:30 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:40.840 16:55:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:40.840 16:55:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:40.840 16:55:30 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:40.840 16:55:30 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:40.840 16:55:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:40.840 16:55:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:40.840 16:55:30 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:40.840 16:55:30 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:40.840 16:55:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:40.840 16:55:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:40.840 16:55:30 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:40.840 16:55:30 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:40.840 16:55:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:40.840 16:55:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:40.840 16:55:30 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:40.840 16:55:30 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:40.840 16:55:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:40.840 16:55:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:40.840 16:55:30 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:40.840 ************************************ 00:06:40.840 END TEST accel_dif_generate 00:06:40.840 ************************************ 00:06:40.840 16:55:30 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:40.840 16:55:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:40.840 16:55:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:40.840 16:55:30 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:40.840 16:55:30 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:06:40.840 
16:55:30 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:40.840 00:06:40.840 real 0m1.520s 00:06:40.840 user 0m1.320s 00:06:40.840 sys 0m0.109s 00:06:40.840 16:55:30 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:40.840 16:55:30 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:06:40.840 16:55:30 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:40.840 16:55:30 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:40.840 16:55:30 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:40.840 16:55:30 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:40.840 16:55:30 accel -- common/autotest_common.sh@10 -- # set +x 00:06:40.840 ************************************ 00:06:40.840 START TEST accel_dif_generate_copy 00:06:40.840 ************************************ 00:06:40.840 16:55:30 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:06:40.840 16:55:30 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:40.840 16:55:30 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:06:40.840 16:55:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:40.840 16:55:30 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:40.840 16:55:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:40.840 16:55:30 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:40.840 16:55:30 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:40.840 16:55:30 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:40.840 16:55:30 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:40.840 16:55:30 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:40.840 16:55:30 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:40.840 16:55:30 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:40.840 16:55:30 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:40.840 16:55:30 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:06:40.840 [2024-07-15 16:55:30.971460] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:06:40.840 [2024-07-15 16:55:30.972196] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61647 ] 00:06:40.840 [2024-07-15 16:55:31.113305] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.100 [2024-07-15 16:55:31.228747] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.100 16:55:31 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:41.100 16:55:31 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:41.100 16:55:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:41.100 16:55:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:41.100 16:55:31 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:41.100 16:55:31 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:41.100 16:55:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:41.100 16:55:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:41.100 16:55:31 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:06:41.100 16:55:31 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:41.100 16:55:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:41.100 16:55:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:41.100 16:55:31 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:41.100 16:55:31 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:41.100 16:55:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:41.100 16:55:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:41.100 16:55:31 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:41.100 16:55:31 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:41.100 16:55:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:41.100 16:55:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:41.100 16:55:31 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:06:41.100 16:55:31 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:41.100 16:55:31 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:06:41.100 16:55:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:41.100 16:55:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:41.100 16:55:31 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:41.100 16:55:31 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:41.100 16:55:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:41.100 16:55:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:41.100 16:55:31 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:41.100 16:55:31 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:41.100 16:55:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:41.100 16:55:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:41.100 16:55:31 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:41.100 16:55:31 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:41.100 16:55:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:41.100 16:55:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:41.100 16:55:31 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:06:41.100 16:55:31 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:41.100 16:55:31 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:41.100 16:55:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:41.100 16:55:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:41.100 16:55:31 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:41.100 16:55:31 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:41.100 16:55:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:41.100 16:55:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:41.100 16:55:31 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:41.100 16:55:31 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:41.100 16:55:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:41.100 16:55:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:41.100 16:55:31 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:06:41.100 16:55:31 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:41.100 16:55:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:41.100 16:55:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:41.100 16:55:31 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:41.100 16:55:31 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:41.100 16:55:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:41.100 16:55:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:41.100 16:55:31 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:06:41.100 16:55:31 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:41.100 16:55:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:41.100 16:55:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:41.100 16:55:31 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:41.100 16:55:31 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:41.100 16:55:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:41.100 16:55:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:41.100 16:55:31 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:41.100 16:55:31 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:41.100 16:55:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:41.100 16:55:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.498 16:55:32 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:42.498 16:55:32 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.498 16:55:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
00:06:42.498 16:55:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.498 16:55:32 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:42.498 16:55:32 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.498 16:55:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.498 16:55:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.498 16:55:32 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:42.498 16:55:32 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.498 16:55:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.498 16:55:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.498 16:55:32 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:42.498 16:55:32 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.498 16:55:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.498 16:55:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.498 16:55:32 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:42.498 16:55:32 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.498 16:55:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.498 16:55:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.498 16:55:32 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:42.498 16:55:32 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.498 16:55:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.498 16:55:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.498 16:55:32 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:42.498 16:55:32 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:06:42.498 16:55:32 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:42.498 00:06:42.498 real 0m1.501s 00:06:42.498 user 0m1.292s 00:06:42.498 sys 0m0.114s 00:06:42.498 ************************************ 00:06:42.498 END TEST accel_dif_generate_copy 00:06:42.498 ************************************ 00:06:42.498 16:55:32 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:42.498 16:55:32 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:06:42.498 16:55:32 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:42.498 16:55:32 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:06:42.498 16:55:32 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:42.498 16:55:32 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:42.498 16:55:32 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:42.498 16:55:32 accel -- common/autotest_common.sh@10 -- # set +x 00:06:42.498 ************************************ 00:06:42.498 START TEST accel_comp 00:06:42.498 ************************************ 00:06:42.498 16:55:32 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:42.498 16:55:32 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:06:42.498 16:55:32 
accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:06:42.498 16:55:32 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:42.498 16:55:32 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:42.498 16:55:32 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:42.498 16:55:32 accel.accel_comp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:42.498 16:55:32 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:06:42.499 16:55:32 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:42.499 16:55:32 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:42.499 16:55:32 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:42.499 16:55:32 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:42.499 16:55:32 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:42.499 16:55:32 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:06:42.499 16:55:32 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:06:42.499 [2024-07-15 16:55:32.517463] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:06:42.499 [2024-07-15 16:55:32.517560] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61687 ] 00:06:42.499 [2024-07-15 16:55:32.657011] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.499 [2024-07-15 16:55:32.770675] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.758 16:55:32 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:42.758 16:55:32 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:42.758 16:55:32 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:42.758 16:55:32 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:42.758 16:55:32 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:42.758 16:55:32 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:42.758 16:55:32 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:42.758 16:55:32 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:42.758 16:55:32 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:42.758 16:55:32 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:42.758 16:55:32 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:42.758 16:55:32 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:42.758 16:55:32 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:06:42.758 16:55:32 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:42.758 16:55:32 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:42.758 16:55:32 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:42.758 16:55:32 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:42.758 16:55:32 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:42.758 16:55:32 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:42.758 16:55:32 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:42.758 16:55:32 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:42.758 16:55:32 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:42.758 16:55:32 accel.accel_comp -- accel/accel.sh@19 -- # 
IFS=: 00:06:42.758 16:55:32 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:42.758 16:55:32 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:06:42.758 16:55:32 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:42.758 16:55:32 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:06:42.758 16:55:32 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:42.758 16:55:32 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:42.758 16:55:32 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:42.758 16:55:32 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:42.758 16:55:32 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:42.758 16:55:32 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:42.758 16:55:32 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:42.758 16:55:32 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:42.758 16:55:32 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:42.758 16:55:32 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:42.758 16:55:32 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:06:42.758 16:55:32 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:42.758 16:55:32 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:06:42.758 16:55:32 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:42.758 16:55:32 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:42.758 16:55:32 accel.accel_comp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:42.758 16:55:32 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:42.758 16:55:32 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:42.758 16:55:32 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:42.758 16:55:32 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:42.758 16:55:32 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:42.758 16:55:32 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:42.758 16:55:32 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:42.758 16:55:32 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:42.758 16:55:32 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:42.758 16:55:32 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:42.758 16:55:32 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:42.758 16:55:32 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:06:42.758 16:55:32 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:42.758 16:55:32 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:42.758 16:55:32 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:42.758 16:55:32 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:42.758 16:55:32 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:42.758 16:55:32 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:42.758 16:55:32 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:42.758 16:55:32 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:06:42.758 16:55:32 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:42.758 16:55:32 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:42.758 16:55:32 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:42.758 16:55:32 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:42.758 16:55:32 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:42.758 16:55:32 
accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:42.758 16:55:32 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:42.758 16:55:32 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:42.758 16:55:32 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:42.758 16:55:32 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:42.758 16:55:32 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:43.697 16:55:33 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:43.697 16:55:33 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:43.697 16:55:33 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:43.697 16:55:33 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:43.697 16:55:33 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:43.697 16:55:33 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:43.697 16:55:33 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:43.697 16:55:33 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:43.697 16:55:33 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:43.697 16:55:33 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:43.697 16:55:33 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:43.697 16:55:33 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:43.697 16:55:33 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:43.697 16:55:33 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:43.697 16:55:33 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:43.697 16:55:33 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:43.697 16:55:33 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:43.697 16:55:33 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:43.697 16:55:33 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:43.697 16:55:33 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:43.697 16:55:33 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:43.697 16:55:33 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:43.697 16:55:33 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:43.697 16:55:33 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:43.697 16:55:33 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:43.697 16:55:33 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:06:43.697 16:55:33 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:43.697 00:06:43.697 real 0m1.501s 00:06:43.697 user 0m1.302s 00:06:43.697 sys 0m0.102s 00:06:43.697 16:55:33 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:43.697 16:55:33 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:06:43.697 ************************************ 00:06:43.697 END TEST accel_comp 00:06:43.697 ************************************ 00:06:43.956 16:55:34 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:43.956 16:55:34 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:43.956 16:55:34 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:43.956 16:55:34 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:43.956 16:55:34 accel -- common/autotest_common.sh@10 -- # set +x 00:06:43.956 ************************************ 00:06:43.956 START TEST accel_decomp 00:06:43.956 ************************************ 00:06:43.956 16:55:34 
accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:43.956 16:55:34 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:06:43.956 16:55:34 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:06:43.956 16:55:34 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:43.956 16:55:34 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:43.956 16:55:34 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:43.956 16:55:34 accel.accel_decomp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:43.956 16:55:34 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:06:43.956 16:55:34 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:43.956 16:55:34 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:43.956 16:55:34 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:43.956 16:55:34 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:43.956 16:55:34 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:43.956 16:55:34 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:06:43.956 16:55:34 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:06:43.956 [2024-07-15 16:55:34.055540] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:06:43.956 [2024-07-15 16:55:34.055653] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61716 ] 00:06:43.956 [2024-07-15 16:55:34.192050] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.215 [2024-07-15 16:55:34.318716] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.215 16:55:34 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:44.215 16:55:34 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:44.215 16:55:34 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:44.215 16:55:34 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:44.215 16:55:34 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:44.215 16:55:34 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:44.215 16:55:34 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:44.215 16:55:34 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:44.215 16:55:34 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:44.215 16:55:34 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:44.215 16:55:34 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:44.215 16:55:34 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:44.215 16:55:34 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:06:44.215 16:55:34 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:44.215 16:55:34 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:44.215 16:55:34 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:44.215 16:55:34 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:44.215 16:55:34 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:44.215 16:55:34 accel.accel_decomp -- 
accel/accel.sh@19 -- # IFS=: 00:06:44.215 16:55:34 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:44.215 16:55:34 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:44.215 16:55:34 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:44.215 16:55:34 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:44.215 16:55:34 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:44.215 16:55:34 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:06:44.215 16:55:34 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:44.215 16:55:34 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:44.215 16:55:34 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:44.215 16:55:34 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:44.215 16:55:34 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:44.215 16:55:34 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:44.215 16:55:34 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:44.215 16:55:34 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:44.215 16:55:34 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:44.215 16:55:34 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:44.215 16:55:34 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:44.215 16:55:34 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:44.215 16:55:34 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:06:44.215 16:55:34 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:44.215 16:55:34 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:06:44.215 16:55:34 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:44.215 16:55:34 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:44.215 16:55:34 accel.accel_decomp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:44.215 16:55:34 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:44.215 16:55:34 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:44.215 16:55:34 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:44.215 16:55:34 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:44.215 16:55:34 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:44.215 16:55:34 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:44.215 16:55:34 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:44.215 16:55:34 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:44.215 16:55:34 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:44.215 16:55:34 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:44.215 16:55:34 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:44.215 16:55:34 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:06:44.215 16:55:34 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:44.215 16:55:34 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:44.215 16:55:34 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:44.215 16:55:34 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:44.215 16:55:34 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:44.215 16:55:34 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:44.215 16:55:34 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:44.215 16:55:34 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 
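The repeated "IFS=:", "read -r var val", "case \"$var\" in" and "val=..." entries in these traces all come from one parsing loop around lines 19-23 of accel/accel.sh, which walks colon-separated var:val pairs describing the run (buffer sizes, the module, the opcode, and so on). A rough reconstruction from the xtrace alone, so the real loop body and its input source may well differ:

  # Reconstructed sketch of the accel.sh loop behind the traced lines;
  # the concrete case branches and the data being read are assumptions.
  while IFS=: read -r var val; do
      case "$var" in
          *) ;;  # each traced "val=...", "accel_opc=...", "accel_module=..." is one branch here
      esac
  done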
00:06:44.215 16:55:34 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:44.215 16:55:34 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:44.215 16:55:34 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:44.215 16:55:34 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:44.215 16:55:34 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:44.215 16:55:34 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:44.215 16:55:34 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:44.215 16:55:34 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:44.215 16:55:34 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:44.215 16:55:34 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:44.215 16:55:34 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:45.589 16:55:35 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:45.589 16:55:35 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:45.589 16:55:35 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:45.589 16:55:35 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:45.589 16:55:35 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:45.589 16:55:35 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:45.589 16:55:35 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:45.589 16:55:35 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:45.589 16:55:35 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:45.589 16:55:35 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:45.589 16:55:35 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:45.589 16:55:35 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:45.589 16:55:35 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:45.589 16:55:35 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:45.589 16:55:35 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:45.589 16:55:35 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:45.589 16:55:35 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:45.589 16:55:35 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:45.589 16:55:35 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:45.589 16:55:35 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:45.589 16:55:35 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:45.589 16:55:35 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:45.589 16:55:35 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:45.589 16:55:35 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:45.589 16:55:35 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:45.589 16:55:35 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:45.589 ************************************ 00:06:45.589 END TEST accel_decomp 00:06:45.589 ************************************ 00:06:45.589 16:55:35 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:45.589 00:06:45.589 real 0m1.513s 00:06:45.589 user 0m1.302s 00:06:45.589 sys 0m0.115s 00:06:45.589 16:55:35 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:45.589 16:55:35 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:06:45.589 16:55:35 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:45.589 16:55:35 accel -- accel/accel.sh@118 -- # run_test 
accel_decomp_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:45.589 16:55:35 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:45.589 16:55:35 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:45.589 16:55:35 accel -- common/autotest_common.sh@10 -- # set +x 00:06:45.589 ************************************ 00:06:45.589 START TEST accel_decomp_full 00:06:45.589 ************************************ 00:06:45.589 16:55:35 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:45.589 16:55:35 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:06:45.589 16:55:35 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:06:45.589 16:55:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:45.589 16:55:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:45.589 16:55:35 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:45.589 16:55:35 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:06:45.589 16:55:35 accel.accel_decomp_full -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:45.589 16:55:35 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:45.589 16:55:35 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:45.589 16:55:35 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:45.590 16:55:35 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:45.590 16:55:35 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:45.590 16:55:35 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:06:45.590 16:55:35 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:06:45.590 [2024-07-15 16:55:35.610492] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
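accel_decomp_full re-runs the same software decompress workload with -o 0: where the previous case reported '4096 bytes' per operation, this one works on the full 111250-byte test vector (the val='111250 bytes' entry further down). The invocation is already spelled out in the trace and can be repeated by hand; dropping -c /dev/fd/62 is an assumption that the default software-module configuration is fine, since that descriptor only exists inside the harness that builds the JSON config:

    # hand-run of the full-buffer decompress case, paths as on this CI VM
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
        -t 1 -w decompress \
        -l /home/vagrant/spdk_repo/spdk/test/accel/bib \
        -y -o 0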
00:06:45.590 [2024-07-15 16:55:35.610590] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61756 ] 00:06:45.590 [2024-07-15 16:55:35.745097] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.590 [2024-07-15 16:55:35.858927] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.848 16:55:35 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:45.848 16:55:35 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:45.848 16:55:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:45.848 16:55:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:45.848 16:55:35 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:45.848 16:55:35 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:45.848 16:55:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:45.848 16:55:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:45.848 16:55:35 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:45.848 16:55:35 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:45.848 16:55:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:45.848 16:55:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:45.848 16:55:35 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:06:45.848 16:55:35 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:45.848 16:55:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:45.848 16:55:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:45.848 16:55:35 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:45.848 16:55:35 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:45.848 16:55:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:45.848 16:55:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:45.848 16:55:35 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:45.848 16:55:35 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:45.848 16:55:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:45.848 16:55:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:45.848 16:55:35 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:06:45.848 16:55:35 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:45.848 16:55:35 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:45.848 16:55:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:45.848 16:55:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:45.848 16:55:35 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:45.848 16:55:35 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:45.848 16:55:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:45.848 16:55:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:45.848 16:55:35 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:45.848 16:55:35 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:45.848 16:55:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:45.848 16:55:35 
accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:45.848 16:55:35 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:06:45.848 16:55:35 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:45.848 16:55:35 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:06:45.848 16:55:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:45.848 16:55:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:45.848 16:55:35 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:45.848 16:55:35 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:45.848 16:55:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:45.848 16:55:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:45.848 16:55:35 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:45.848 16:55:35 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:45.848 16:55:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:45.848 16:55:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:45.848 16:55:35 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:45.848 16:55:35 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:45.848 16:55:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:45.848 16:55:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:45.848 16:55:35 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:06:45.848 16:55:35 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:45.848 16:55:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:45.848 16:55:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:45.848 16:55:35 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:06:45.848 16:55:35 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:45.848 16:55:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:45.848 16:55:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:45.848 16:55:35 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:06:45.848 16:55:35 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:45.848 16:55:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:45.848 16:55:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:45.848 16:55:35 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:45.848 16:55:35 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:45.848 16:55:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:45.848 16:55:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:45.848 16:55:35 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:45.848 16:55:35 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:45.848 16:55:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:45.848 16:55:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:47.223 16:55:37 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:47.223 16:55:37 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:47.223 16:55:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:47.223 16:55:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:47.223 16:55:37 
accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:47.223 16:55:37 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:47.223 16:55:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:47.223 16:55:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:47.223 16:55:37 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:47.223 16:55:37 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:47.223 16:55:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:47.223 16:55:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:47.223 16:55:37 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:47.223 16:55:37 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:47.223 16:55:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:47.223 16:55:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:47.223 16:55:37 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:47.223 16:55:37 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:47.223 16:55:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:47.223 16:55:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:47.223 16:55:37 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:47.223 16:55:37 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:47.223 16:55:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:47.223 16:55:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:47.223 ************************************ 00:06:47.223 END TEST accel_decomp_full 00:06:47.224 ************************************ 00:06:47.224 16:55:37 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:47.224 16:55:37 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:47.224 16:55:37 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:47.224 00:06:47.224 real 0m1.500s 00:06:47.224 user 0m1.299s 00:06:47.224 sys 0m0.107s 00:06:47.224 16:55:37 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:47.224 16:55:37 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:06:47.224 16:55:37 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:47.224 16:55:37 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:47.224 16:55:37 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:47.224 16:55:37 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:47.224 16:55:37 accel -- common/autotest_common.sh@10 -- # set +x 00:06:47.224 ************************************ 00:06:47.224 START TEST accel_decomp_mcore 00:06:47.224 ************************************ 00:06:47.224 16:55:37 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:47.224 16:55:37 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:47.224 16:55:37 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:47.224 16:55:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:47.224 16:55:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:47.224 16:55:37 accel.accel_decomp_mcore -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:47.224 16:55:37 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:47.224 16:55:37 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:47.224 16:55:37 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:47.224 16:55:37 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:47.224 16:55:37 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:47.224 16:55:37 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:47.224 16:55:37 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:47.224 16:55:37 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:47.224 16:55:37 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:47.224 [2024-07-15 16:55:37.158644] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:06:47.224 [2024-07-15 16:55:37.158883] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61785 ] 00:06:47.224 [2024-07-15 16:55:37.301273] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:47.224 [2024-07-15 16:55:37.429021] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:47.224 [2024-07-15 16:55:37.429147] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:47.224 [2024-07-15 16:55:37.429273] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:47.224 [2024-07-15 16:55:37.429427] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.224 16:55:37 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:47.224 16:55:37 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:47.224 16:55:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:47.224 16:55:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:47.224 16:55:37 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:47.224 16:55:37 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:47.224 16:55:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:47.224 16:55:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:47.224 16:55:37 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:47.224 16:55:37 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:47.224 16:55:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:47.224 16:55:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:47.224 16:55:37 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:47.224 16:55:37 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:47.224 16:55:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:47.224 16:55:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:47.224 16:55:37 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:47.224 16:55:37 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:47.224 16:55:37 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:06:47.224 16:55:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:47.224 16:55:37 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:47.224 16:55:37 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:47.224 16:55:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:47.224 16:55:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:47.224 16:55:37 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:47.224 16:55:37 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:47.224 16:55:37 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:47.224 16:55:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:47.224 16:55:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:47.224 16:55:37 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:47.224 16:55:37 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:47.224 16:55:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:47.224 16:55:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:47.224 16:55:37 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:47.224 16:55:37 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:47.224 16:55:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:47.224 16:55:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:47.224 16:55:37 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:06:47.224 16:55:37 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:47.224 16:55:37 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:47.224 16:55:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:47.224 16:55:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:47.224 16:55:37 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:47.224 16:55:37 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:47.224 16:55:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:47.224 16:55:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:47.224 16:55:37 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:47.224 16:55:37 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:47.224 16:55:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:47.224 16:55:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:47.224 16:55:37 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:47.224 16:55:37 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:47.224 16:55:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:47.224 16:55:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:47.224 16:55:37 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:06:47.224 16:55:37 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:47.224 16:55:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:47.224 16:55:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:47.224 16:55:37 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:47.224 16:55:37 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # 
case "$var" in 00:06:47.224 16:55:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:47.224 16:55:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:47.224 16:55:37 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:47.224 16:55:37 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:47.224 16:55:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:47.224 16:55:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:47.224 16:55:37 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:47.224 16:55:37 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:47.224 16:55:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:47.224 16:55:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:47.224 16:55:37 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:47.224 16:55:37 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:47.224 16:55:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:47.224 16:55:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:48.598 16:55:38 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:48.598 16:55:38 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:48.598 16:55:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:48.598 16:55:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:48.598 16:55:38 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:48.598 16:55:38 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:48.598 16:55:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:48.598 16:55:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:48.598 16:55:38 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:48.598 16:55:38 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:48.598 16:55:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:48.598 16:55:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:48.598 16:55:38 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:48.598 16:55:38 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:48.598 16:55:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:48.598 16:55:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:48.598 16:55:38 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:48.598 16:55:38 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:48.598 16:55:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:48.598 16:55:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:48.598 16:55:38 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:48.598 16:55:38 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:48.598 16:55:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:48.598 16:55:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:48.598 16:55:38 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:48.598 16:55:38 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:48.598 16:55:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:48.598 16:55:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:48.598 16:55:38 
accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:48.598 16:55:38 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:48.598 16:55:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:48.598 16:55:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:48.598 16:55:38 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:48.598 ************************************ 00:06:48.598 END TEST accel_decomp_mcore 00:06:48.598 ************************************ 00:06:48.598 16:55:38 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:48.598 16:55:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:48.598 16:55:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:48.598 16:55:38 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:48.598 16:55:38 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:48.598 16:55:38 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:48.598 00:06:48.598 real 0m1.548s 00:06:48.598 user 0m4.736s 00:06:48.598 sys 0m0.134s 00:06:48.598 16:55:38 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:48.598 16:55:38 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:48.598 16:55:38 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:48.598 16:55:38 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:48.598 16:55:38 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:48.598 16:55:38 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:48.598 16:55:38 accel -- common/autotest_common.sh@10 -- # set +x 00:06:48.598 ************************************ 00:06:48.598 START TEST accel_decomp_full_mcore 00:06:48.598 ************************************ 00:06:48.598 16:55:38 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:48.598 16:55:38 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:48.598 16:55:38 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:48.598 16:55:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:48.598 16:55:38 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:48.598 16:55:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:48.598 16:55:38 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:48.598 16:55:38 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:48.598 16:55:38 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:48.598 16:55:38 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:48.598 16:55:38 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:48.598 16:55:38 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:48.598 16:55:38 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:48.598 16:55:38 
accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:48.598 16:55:38 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:48.598 [2024-07-15 16:55:38.750327] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:06:48.598 [2024-07-15 16:55:38.750425] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61828 ] 00:06:48.598 [2024-07-15 16:55:38.881507] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:48.856 [2024-07-15 16:55:38.994758] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:48.856 [2024-07-15 16:55:38.994853] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:48.856 [2024-07-15 16:55:38.994970] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:48.856 [2024-07-15 16:55:38.995083] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.856 16:55:39 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:48.856 16:55:39 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:48.856 16:55:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:48.856 16:55:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:48.856 16:55:39 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:48.856 16:55:39 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:48.856 16:55:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:48.856 16:55:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:48.856 16:55:39 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:48.856 16:55:39 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:48.856 16:55:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:48.856 16:55:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:48.856 16:55:39 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:48.856 16:55:39 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:48.856 16:55:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:48.856 16:55:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:48.856 16:55:39 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:48.856 16:55:39 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:48.856 16:55:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:48.856 16:55:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:48.856 16:55:39 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:48.856 16:55:39 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:48.856 16:55:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:48.856 16:55:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:48.856 16:55:39 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:48.856 16:55:39 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:48.856 16:55:39 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:48.856 16:55:39 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:48.856 16:55:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:48.856 16:55:39 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:48.856 16:55:39 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:48.856 16:55:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:48.857 16:55:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:48.857 16:55:39 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:48.857 16:55:39 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:48.857 16:55:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:48.857 16:55:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:48.857 16:55:39 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:06:48.857 16:55:39 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:48.857 16:55:39 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:48.857 16:55:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:48.857 16:55:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:48.857 16:55:39 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:48.857 16:55:39 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:48.857 16:55:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:48.857 16:55:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:48.857 16:55:39 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:48.857 16:55:39 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:48.857 16:55:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:48.857 16:55:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:48.857 16:55:39 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:48.857 16:55:39 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:48.857 16:55:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:48.857 16:55:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:48.857 16:55:39 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:06:48.857 16:55:39 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:48.857 16:55:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:48.857 16:55:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:48.857 16:55:39 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:48.857 16:55:39 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:48.857 16:55:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:48.857 16:55:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:48.857 16:55:39 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:48.857 16:55:39 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:48.857 16:55:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:48.857 16:55:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:48.857 16:55:39 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:48.857 16:55:39 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:48.857 16:55:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:48.857 16:55:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:48.857 16:55:39 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:48.857 16:55:39 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:48.857 16:55:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:48.857 16:55:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:50.273 16:55:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:50.273 16:55:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:50.273 16:55:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:50.273 16:55:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:50.273 16:55:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:50.273 16:55:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:50.273 16:55:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:50.273 16:55:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:50.273 16:55:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:50.273 16:55:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:50.273 16:55:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:50.273 16:55:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:50.273 16:55:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:50.273 16:55:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:50.273 16:55:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:50.273 16:55:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:50.273 16:55:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:50.273 16:55:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:50.273 16:55:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:50.273 16:55:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:50.273 16:55:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:50.273 16:55:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:50.273 16:55:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:50.273 16:55:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:50.273 16:55:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:50.273 16:55:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:50.273 16:55:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:50.273 ************************************ 00:06:50.273 END TEST accel_decomp_full_mcore 00:06:50.273 ************************************ 00:06:50.273 16:55:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:50.273 16:55:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:50.273 16:55:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:50.273 16:55:40 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:06:50.273 16:55:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:50.273 16:55:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:50.273 16:55:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:50.273 16:55:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:50.273 16:55:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:50.273 16:55:40 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:50.273 16:55:40 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:50.273 16:55:40 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:50.273 00:06:50.273 real 0m1.507s 00:06:50.273 user 0m4.705s 00:06:50.273 sys 0m0.125s 00:06:50.273 16:55:40 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:50.273 16:55:40 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:50.273 16:55:40 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:50.273 16:55:40 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:50.273 16:55:40 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:50.273 16:55:40 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:50.273 16:55:40 accel -- common/autotest_common.sh@10 -- # set +x 00:06:50.273 ************************************ 00:06:50.273 START TEST accel_decomp_mthread 00:06:50.273 ************************************ 00:06:50.273 16:55:40 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:50.273 16:55:40 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:50.273 16:55:40 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:50.273 16:55:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:50.274 16:55:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:50.274 16:55:40 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:50.274 16:55:40 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:50.274 16:55:40 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:50.274 16:55:40 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:50.274 16:55:40 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:50.274 16:55:40 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:50.274 16:55:40 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:50.274 16:55:40 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:50.274 16:55:40 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:50.274 16:55:40 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:50.274 [2024-07-15 16:55:40.304307] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
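Both -m 0xf runs above (accel_decomp_mcore and accel_decomp_full_mcore) finish in roughly the same ~1.5 s of wall-clock time as the single-core cases but report ~4.7 s of user CPU: the 0xf mask brings up four reactors (the four "Reactor started on core N" notices), and with a 1-second measurement window on each of them the user time scales with the number of bits set in the mask. A throwaway helper, not part of the test scripts, to expand such a mask into core numbers:

    # expand an SPDK/DPDK core mask into core indices; 0xf -> cores 0 1 2 3
    mask=0xf
    for core in $(seq 0 31); do
        if (( (mask >> core) & 1 )); then
            echo "core $core"
        fi
    done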
00:06:50.274 [2024-07-15 16:55:40.304502] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61866 ] 00:06:50.274 [2024-07-15 16:55:40.440853] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.274 [2024-07-15 16:55:40.550290] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.532 16:55:40 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:50.532 16:55:40 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:50.532 16:55:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:50.532 16:55:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:50.532 16:55:40 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:50.532 16:55:40 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:50.532 16:55:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:50.532 16:55:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:50.532 16:55:40 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:50.532 16:55:40 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:50.532 16:55:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:50.532 16:55:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:50.532 16:55:40 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:50.532 16:55:40 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:50.532 16:55:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:50.532 16:55:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:50.532 16:55:40 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:50.532 16:55:40 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:50.532 16:55:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:50.532 16:55:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:50.532 16:55:40 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:50.532 16:55:40 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:50.532 16:55:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:50.532 16:55:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:50.532 16:55:40 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:50.532 16:55:40 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:50.532 16:55:40 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:50.532 16:55:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:50.532 16:55:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:50.532 16:55:40 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:50.532 16:55:40 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:50.532 16:55:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:50.532 16:55:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:50.532 16:55:40 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:50.532 16:55:40 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 
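accel_decomp_mthread stays on a single core (the EAL parameters above use -c 0x1 and only one reactor starts) and instead adds -T 2, so accel_perf drives the decompress workload from two worker threads on that core; the val=2 entry in the option dump below is that thread count being read back. Assuming accel_perf stands for the full build/examples path used throughout this log, the threaded variants differ from the earlier invocations only in their trailing flags:

    # threaded variants of the same decompress run (accel_perf = /home/vagrant/spdk_repo/spdk/build/examples/accel_perf)
    accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2        # accel_decomp_mthread
    accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2   # accel_decomp_full_mthread (below)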
00:06:50.532 16:55:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:50.532 16:55:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:50.532 16:55:40 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:06:50.532 16:55:40 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:50.532 16:55:40 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:50.532 16:55:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:50.532 16:55:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:50.532 16:55:40 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:50.532 16:55:40 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:50.532 16:55:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:50.532 16:55:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:50.532 16:55:40 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:50.532 16:55:40 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:50.532 16:55:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:50.532 16:55:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:50.532 16:55:40 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:50.532 16:55:40 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:50.532 16:55:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:50.532 16:55:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:50.532 16:55:40 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:06:50.532 16:55:40 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:50.532 16:55:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:50.532 16:55:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:50.532 16:55:40 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:50.532 16:55:40 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:50.532 16:55:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:50.533 16:55:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:50.533 16:55:40 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:50.533 16:55:40 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:50.533 16:55:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:50.533 16:55:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:50.533 16:55:40 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:50.533 16:55:40 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:50.533 16:55:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:50.533 16:55:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:50.533 16:55:40 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:50.533 16:55:40 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:50.533 16:55:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:50.533 16:55:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:51.908 16:55:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:51.908 16:55:41 accel.accel_decomp_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:06:51.908 16:55:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:51.908 16:55:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:51.908 16:55:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:51.908 16:55:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:51.908 16:55:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:51.908 16:55:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:51.908 16:55:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:51.908 16:55:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:51.908 16:55:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:51.908 16:55:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:51.908 16:55:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:51.908 16:55:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:51.908 16:55:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:51.908 16:55:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:51.908 16:55:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:51.908 16:55:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:51.908 16:55:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:51.908 16:55:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:51.908 16:55:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:51.908 16:55:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:51.908 16:55:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:51.908 16:55:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:51.908 16:55:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:51.908 16:55:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:51.908 16:55:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:51.908 16:55:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:51.908 16:55:41 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:51.908 16:55:41 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:51.908 16:55:41 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:51.908 00:06:51.908 real 0m1.501s 00:06:51.908 user 0m1.296s 00:06:51.908 sys 0m0.112s 00:06:51.908 16:55:41 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:51.908 ************************************ 00:06:51.908 END TEST accel_decomp_mthread 00:06:51.908 ************************************ 00:06:51.908 16:55:41 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:51.908 16:55:41 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:51.908 16:55:41 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:51.908 16:55:41 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:51.908 16:55:41 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:51.908 16:55:41 accel -- common/autotest_common.sh@10 -- # set +x 00:06:51.908 ************************************ 00:06:51.908 START 
TEST accel_decomp_full_mthread 00:06:51.908 ************************************ 00:06:51.908 16:55:41 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:51.908 16:55:41 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:51.908 16:55:41 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:51.908 16:55:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:51.908 16:55:41 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:51.908 16:55:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:51.908 16:55:41 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:51.908 16:55:41 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:51.908 16:55:41 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:51.908 16:55:41 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:51.908 16:55:41 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:51.908 16:55:41 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:51.908 16:55:41 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:51.908 16:55:41 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:51.908 16:55:41 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:51.908 [2024-07-15 16:55:41.855450] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:06:51.909 [2024-07-15 16:55:41.855546] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61900 ] 00:06:51.909 [2024-07-15 16:55:41.994406] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.909 [2024-07-15 16:55:42.107286] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.909 16:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:51.909 16:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:51.909 16:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:51.909 16:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:51.909 16:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:51.909 16:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:51.909 16:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:51.909 16:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:51.909 16:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:51.909 16:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:51.909 16:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:51.909 16:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:51.909 16:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:51.909 16:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:51.909 16:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:51.909 16:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:51.909 16:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:51.909 16:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:51.909 16:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:51.909 16:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:51.909 16:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:51.909 16:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:51.909 16:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:51.909 16:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:51.909 16:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:51.909 16:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:51.909 16:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:51.909 16:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:51.909 16:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:51.909 16:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:51.909 16:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:51.909 16:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:51.909 16:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 
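Each case also runs build_accel_config before accel_perf (the accel_json_cfg=() entry above, followed by the [[ 0 -gt 0 ]] guards and jq -r .). In every run in this log those guards evaluate false, so the array stays empty and nothing is appended, which is consistent with the module reported back always being the plain software one. A loose reconstruction of that empty-config path, guessed from the xtrace entries rather than taken from accel.sh, with the resulting JSON normally handed to accel_perf on /dev/fd/62:

    # reconstruction of the empty build_accel_config seen at accel/accel.sh@31-41 (plumbing is guessed)
    build_accel_config_sketch() {
        local accel_json_cfg=()                # stays empty here: nothing enabled beyond the defaults
        local IFS=,                            # would join several config entries with commas
        jq -r . <<< "[${accel_json_cfg[*]}]"   # empty array pretty-prints as [], i.e. no module overrides
    }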
00:06:51.909 16:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:51.909 16:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:51.909 16:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:51.909 16:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:51.909 16:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:06:51.909 16:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:51.909 16:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:51.909 16:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:51.909 16:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:51.909 16:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:51.909 16:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:51.909 16:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:51.909 16:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:51.909 16:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:51.909 16:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:51.909 16:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:51.909 16:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:51.909 16:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:51.909 16:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:51.909 16:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:51.909 16:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:51.909 16:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:06:51.909 16:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:51.909 16:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:51.909 16:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:51.909 16:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:51.909 16:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:51.909 16:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:51.909 16:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:51.909 16:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:51.909 16:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:51.909 16:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:51.909 16:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:51.909 16:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:51.909 16:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:51.909 16:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:51.909 16:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:51.909 16:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:51.909 16:55:42 
accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:51.909 16:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:51.909 16:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:53.284 16:55:43 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:53.284 16:55:43 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:53.284 16:55:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:53.284 16:55:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:53.284 16:55:43 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:53.284 16:55:43 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:53.284 16:55:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:53.284 16:55:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:53.284 16:55:43 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:53.284 16:55:43 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:53.284 16:55:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:53.284 16:55:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:53.284 16:55:43 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:53.284 16:55:43 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:53.284 16:55:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:53.284 16:55:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:53.284 16:55:43 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:53.284 16:55:43 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:53.284 16:55:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:53.284 16:55:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:53.284 16:55:43 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:53.284 16:55:43 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:53.284 16:55:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:53.284 16:55:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:53.284 16:55:43 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:53.284 16:55:43 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:53.284 16:55:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:53.284 16:55:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:53.284 16:55:43 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:53.284 16:55:43 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:53.284 16:55:43 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:53.284 00:06:53.284 real 0m1.532s 00:06:53.284 user 0m1.317s 00:06:53.284 sys 0m0.122s 00:06:53.284 16:55:43 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:53.284 ************************************ 00:06:53.284 END TEST accel_decomp_full_mthread 00:06:53.285 ************************************ 00:06:53.285 16:55:43 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 
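Aside: the accel_decomp_full_mthread case above is driven by a single accel_perf invocation; everything else in the trace is accel.sh feeding parameters into it. A minimal standalone sketch of the same run, assuming the same /home/vagrant/spdk_repo checkout and that hugepages are already configured via scripts/setup.sh as in this CI job (the harness also passes an accel JSON config over /dev/fd/62, which is empty for this case, so it is omitted here):

    cd /home/vagrant/spdk_repo/spdk
    # software decompress of the pre-compressed test input, one-second run (-t 1),
    # verified output (-y), two worker threads (-T 2); flag meanings inferred from the trace above
    ./build/examples/accel_perf -t 1 -w decompress -l ./test/accel/bib -y -o 0 -T 2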
00:06:53.285 16:55:43 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:53.285 16:55:43 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:06:53.285 16:55:43 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:53.285 16:55:43 accel -- accel/accel.sh@137 -- # build_accel_config 00:06:53.285 16:55:43 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:53.285 16:55:43 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:53.285 16:55:43 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:53.285 16:55:43 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:53.285 16:55:43 accel -- common/autotest_common.sh@10 -- # set +x 00:06:53.285 16:55:43 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:53.285 16:55:43 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:53.285 16:55:43 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:53.285 16:55:43 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:53.285 16:55:43 accel -- accel/accel.sh@41 -- # jq -r . 00:06:53.285 ************************************ 00:06:53.285 START TEST accel_dif_functional_tests 00:06:53.285 ************************************ 00:06:53.285 16:55:43 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:53.285 [2024-07-15 16:55:43.465039] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:06:53.285 [2024-07-15 16:55:43.465139] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61938 ] 00:06:53.543 [2024-07-15 16:55:43.604400] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:53.543 [2024-07-15 16:55:43.722741] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:53.543 [2024-07-15 16:55:43.722831] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:53.543 [2024-07-15 16:55:43.722837] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.543 [2024-07-15 16:55:43.776995] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:53.543 00:06:53.543 00:06:53.543 CUnit - A unit testing framework for C - Version 2.1-3 00:06:53.543 http://cunit.sourceforge.net/ 00:06:53.543 00:06:53.543 00:06:53.543 Suite: accel_dif 00:06:53.543 Test: verify: DIF generated, GUARD check ...passed 00:06:53.543 Test: verify: DIF generated, APPTAG check ...passed 00:06:53.543 Test: verify: DIF generated, REFTAG check ...passed 00:06:53.543 Test: verify: DIF not generated, GUARD check ...passed 00:06:53.543 Test: verify: DIF not generated, APPTAG check ...[2024-07-15 16:55:43.814571] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:53.543 [2024-07-15 16:55:43.814658] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:53.543 passed 00:06:53.543 Test: verify: DIF not generated, REFTAG check ...[2024-07-15 16:55:43.814776] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:53.543 passed 00:06:53.543 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:53.543 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-15 16:55:43.814885] dif.c: 841:_dif_verify: 
*ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:53.543 passed 00:06:53.543 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:06:53.543 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:53.543 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:53.543 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-15 16:55:43.815248] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:53.543 passed 00:06:53.543 Test: verify copy: DIF generated, GUARD check ...passed 00:06:53.543 Test: verify copy: DIF generated, APPTAG check ...passed 00:06:53.543 Test: verify copy: DIF generated, REFTAG check ...passed 00:06:53.543 Test: verify copy: DIF not generated, GUARD check ...[2024-07-15 16:55:43.815458] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:53.543 passed 00:06:53.543 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-15 16:55:43.815631] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:53.543 passed 00:06:53.543 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-15 16:55:43.815768] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:53.543 passed 00:06:53.543 Test: generate copy: DIF generated, GUARD check ...passed 00:06:53.543 Test: generate copy: DIF generated, APTTAG check ...passed 00:06:53.543 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:53.543 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:53.543 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:53.543 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:53.543 Test: generate copy: iovecs-len validate ...passed 00:06:53.543 Test: generate copy: buffer alignment validate ...passed[2024-07-15 16:55:43.816424] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
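Aside: the *ERROR* lines above do not indicate a failure. They come from the negative-path cases ("DIF not generated", "APPTAG incorrect", and so on), which expect the Guard / App Tag / Ref Tag comparison to fail and only assert that the mismatch is reported, so every test still shows as passed. The suite itself is an ordinary CUnit binary; a minimal sketch for running it by hand, assuming the same repository layout and hugepage setup (the harness's /dev/fd/62 JSON config is empty here and omitted):

    cd /home/vagrant/spdk_repo/spdk
    ./test/accel/dif/dif    # prints the Suite: accel_dif results and the CUnit run summary seen below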
00:06:53.543 00:06:53.543 00:06:53.543 Run Summary: Type Total Ran Passed Failed Inactive 00:06:53.543 suites 1 1 n/a 0 0 00:06:53.543 tests 26 26 26 0 0 00:06:53.543 asserts 115 115 115 0 n/a 00:06:53.543 00:06:53.543 Elapsed time = 0.006 seconds 00:06:53.801 ************************************ 00:06:53.801 END TEST accel_dif_functional_tests 00:06:53.801 ************************************ 00:06:53.801 00:06:53.801 real 0m0.615s 00:06:53.801 user 0m0.818s 00:06:53.801 sys 0m0.147s 00:06:53.801 16:55:44 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:53.801 16:55:44 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:06:53.801 16:55:44 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:53.801 ************************************ 00:06:53.801 END TEST accel 00:06:53.801 ************************************ 00:06:53.801 00:06:53.801 real 0m34.551s 00:06:53.801 user 0m36.476s 00:06:53.801 sys 0m3.797s 00:06:53.801 16:55:44 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:53.801 16:55:44 accel -- common/autotest_common.sh@10 -- # set +x 00:06:53.801 16:55:44 -- common/autotest_common.sh@1142 -- # return 0 00:06:53.801 16:55:44 -- spdk/autotest.sh@184 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:06:53.801 16:55:44 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:53.801 16:55:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:53.801 16:55:44 -- common/autotest_common.sh@10 -- # set +x 00:06:54.060 ************************************ 00:06:54.060 START TEST accel_rpc 00:06:54.060 ************************************ 00:06:54.060 16:55:44 accel_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:06:54.060 * Looking for test storage... 00:06:54.060 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:06:54.060 16:55:44 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:54.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:54.060 16:55:44 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=62002 00:06:54.060 16:55:44 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 62002 00:06:54.060 16:55:44 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 62002 ']' 00:06:54.060 16:55:44 accel_rpc -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:54.060 16:55:44 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:54.060 16:55:44 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:54.060 16:55:44 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:54.060 16:55:44 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:54.060 16:55:44 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:54.060 [2024-07-15 16:55:44.284532] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:06:54.060 [2024-07-15 16:55:44.284678] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62002 ] 00:06:54.318 [2024-07-15 16:55:44.429439] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.318 [2024-07-15 16:55:44.569465] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.252 16:55:45 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:55.252 16:55:45 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:55.252 16:55:45 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:55.252 16:55:45 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:55.252 16:55:45 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:55.252 16:55:45 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:55.252 16:55:45 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:55.252 16:55:45 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:55.252 16:55:45 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:55.252 16:55:45 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:55.252 ************************************ 00:06:55.252 START TEST accel_assign_opcode 00:06:55.252 ************************************ 00:06:55.252 16:55:45 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:06:55.252 16:55:45 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:55.252 16:55:45 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.252 16:55:45 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:55.252 [2024-07-15 16:55:45.254014] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:55.252 16:55:45 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:55.252 16:55:45 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:55.252 16:55:45 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.252 16:55:45 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:55.252 [2024-07-15 16:55:45.261992] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:55.252 16:55:45 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:55.252 16:55:45 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:55.252 16:55:45 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.252 16:55:45 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:55.252 [2024-07-15 16:55:45.324337] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:55.252 16:55:45 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:55.252 16:55:45 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:55.252 16:55:45 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.252 
16:55:45 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:55.252 16:55:45 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:06:55.252 16:55:45 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:55.252 16:55:45 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:55.252 software 00:06:55.252 00:06:55.252 real 0m0.297s 00:06:55.252 user 0m0.053s 00:06:55.252 sys 0m0.010s 00:06:55.252 ************************************ 00:06:55.252 END TEST accel_assign_opcode 00:06:55.252 ************************************ 00:06:55.252 16:55:45 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:55.252 16:55:45 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:55.512 16:55:45 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:55.512 16:55:45 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 62002 00:06:55.512 16:55:45 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 62002 ']' 00:06:55.512 16:55:45 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 62002 00:06:55.512 16:55:45 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:06:55.512 16:55:45 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:55.512 16:55:45 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62002 00:06:55.512 killing process with pid 62002 00:06:55.512 16:55:45 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:55.512 16:55:45 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:55.512 16:55:45 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62002' 00:06:55.512 16:55:45 accel_rpc -- common/autotest_common.sh@967 -- # kill 62002 00:06:55.512 16:55:45 accel_rpc -- common/autotest_common.sh@972 -- # wait 62002 00:06:55.781 00:06:55.781 real 0m1.882s 00:06:55.781 user 0m1.976s 00:06:55.781 sys 0m0.450s 00:06:55.781 16:55:45 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:55.781 ************************************ 00:06:55.781 END TEST accel_rpc 00:06:55.781 ************************************ 00:06:55.781 16:55:45 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:55.781 16:55:46 -- common/autotest_common.sh@1142 -- # return 0 00:06:55.781 16:55:46 -- spdk/autotest.sh@185 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:55.781 16:55:46 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:55.781 16:55:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:55.781 16:55:46 -- common/autotest_common.sh@10 -- # set +x 00:06:55.781 ************************************ 00:06:55.781 START TEST app_cmdline 00:06:55.781 ************************************ 00:06:55.781 16:55:46 app_cmdline -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:56.040 * Looking for test storage... 00:06:56.040 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:56.040 16:55:46 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:56.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
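Aside: the accel_rpc/accel_assign_opcode run above reduces to a short RPC conversation with a target started under --wait-for-rpc, which is what allows the copy opcode's module assignment to be changed before the accel framework initializes. A condensed sketch of that flow, assuming the target listens on the default /var/tmp/spdk.sock and using the same scripts/rpc.py helper the harness wraps:

    ./build/bin/spdk_tgt --wait-for-rpc &                     # harness waits for the RPC socket (waitforlisten) before issuing RPCs
    ./scripts/rpc.py accel_assign_opc -o copy -m incorrect    # accepted at RPC time even though no such module exists
    ./scripts/rpc.py accel_assign_opc -o copy -m software     # overrides the previous assignment
    ./scripts/rpc.py framework_start_init                     # accel subsystem initializes, assignments take effect
    ./scripts/rpc.py accel_get_opc_assignments | jq -r .copy  # prints "software", as checked above
    # the harness then tears the target down with killprocess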
00:06:56.040 16:55:46 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=62095 00:06:56.040 16:55:46 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:56.040 16:55:46 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 62095 00:06:56.040 16:55:46 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 62095 ']' 00:06:56.040 16:55:46 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:56.040 16:55:46 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:56.040 16:55:46 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:56.040 16:55:46 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:56.040 16:55:46 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:56.040 [2024-07-15 16:55:46.178851] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:06:56.040 [2024-07-15 16:55:46.178977] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62095 ] 00:06:56.040 [2024-07-15 16:55:46.310885] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.298 [2024-07-15 16:55:46.423484] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.298 [2024-07-15 16:55:46.476186] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:56.862 16:55:47 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:56.862 16:55:47 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:06:56.862 16:55:47 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:57.119 { 00:06:57.119 "version": "SPDK v24.09-pre git sha1 a95bbf233", 00:06:57.119 "fields": { 00:06:57.119 "major": 24, 00:06:57.119 "minor": 9, 00:06:57.119 "patch": 0, 00:06:57.119 "suffix": "-pre", 00:06:57.119 "commit": "a95bbf233" 00:06:57.119 } 00:06:57.119 } 00:06:57.119 16:55:47 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:57.119 16:55:47 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:57.119 16:55:47 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:57.119 16:55:47 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:57.119 16:55:47 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:57.119 16:55:47 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:57.119 16:55:47 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:57.119 16:55:47 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:57.119 16:55:47 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:57.375 16:55:47 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:57.375 16:55:47 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:57.375 16:55:47 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:57.375 16:55:47 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:57.375 16:55:47 app_cmdline -- 
common/autotest_common.sh@648 -- # local es=0 00:06:57.375 16:55:47 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:57.375 16:55:47 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:57.375 16:55:47 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:57.375 16:55:47 app_cmdline -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:57.375 16:55:47 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:57.375 16:55:47 app_cmdline -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:57.375 16:55:47 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:57.375 16:55:47 app_cmdline -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:57.375 16:55:47 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:57.375 16:55:47 app_cmdline -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:57.631 request: 00:06:57.631 { 00:06:57.631 "method": "env_dpdk_get_mem_stats", 00:06:57.631 "req_id": 1 00:06:57.631 } 00:06:57.631 Got JSON-RPC error response 00:06:57.631 response: 00:06:57.631 { 00:06:57.631 "code": -32601, 00:06:57.631 "message": "Method not found" 00:06:57.631 } 00:06:57.631 16:55:47 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:06:57.631 16:55:47 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:57.631 16:55:47 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:57.631 16:55:47 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:57.631 16:55:47 app_cmdline -- app/cmdline.sh@1 -- # killprocess 62095 00:06:57.631 16:55:47 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 62095 ']' 00:06:57.631 16:55:47 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 62095 00:06:57.631 16:55:47 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:06:57.631 16:55:47 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:57.631 16:55:47 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62095 00:06:57.631 killing process with pid 62095 00:06:57.631 16:55:47 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:57.631 16:55:47 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:57.631 16:55:47 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62095' 00:06:57.631 16:55:47 app_cmdline -- common/autotest_common.sh@967 -- # kill 62095 00:06:57.631 16:55:47 app_cmdline -- common/autotest_common.sh@972 -- # wait 62095 00:06:57.889 00:06:57.889 real 0m2.069s 00:06:57.889 user 0m2.593s 00:06:57.889 sys 0m0.450s 00:06:57.889 16:55:48 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:57.889 ************************************ 00:06:57.889 END TEST app_cmdline 00:06:57.889 ************************************ 00:06:57.889 16:55:48 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:57.889 16:55:48 -- common/autotest_common.sh@1142 -- # return 0 00:06:57.889 16:55:48 -- spdk/autotest.sh@186 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:57.889 16:55:48 -- 
common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:57.889 16:55:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:57.889 16:55:48 -- common/autotest_common.sh@10 -- # set +x 00:06:57.889 ************************************ 00:06:57.889 START TEST version 00:06:57.889 ************************************ 00:06:57.889 16:55:48 version -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:58.146 * Looking for test storage... 00:06:58.146 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:58.146 16:55:48 version -- app/version.sh@17 -- # get_header_version major 00:06:58.146 16:55:48 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:58.146 16:55:48 version -- app/version.sh@14 -- # cut -f2 00:06:58.146 16:55:48 version -- app/version.sh@14 -- # tr -d '"' 00:06:58.146 16:55:48 version -- app/version.sh@17 -- # major=24 00:06:58.146 16:55:48 version -- app/version.sh@18 -- # get_header_version minor 00:06:58.146 16:55:48 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:58.146 16:55:48 version -- app/version.sh@14 -- # cut -f2 00:06:58.146 16:55:48 version -- app/version.sh@14 -- # tr -d '"' 00:06:58.146 16:55:48 version -- app/version.sh@18 -- # minor=9 00:06:58.146 16:55:48 version -- app/version.sh@19 -- # get_header_version patch 00:06:58.146 16:55:48 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:58.146 16:55:48 version -- app/version.sh@14 -- # cut -f2 00:06:58.146 16:55:48 version -- app/version.sh@14 -- # tr -d '"' 00:06:58.146 16:55:48 version -- app/version.sh@19 -- # patch=0 00:06:58.146 16:55:48 version -- app/version.sh@20 -- # get_header_version suffix 00:06:58.146 16:55:48 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:58.146 16:55:48 version -- app/version.sh@14 -- # cut -f2 00:06:58.146 16:55:48 version -- app/version.sh@14 -- # tr -d '"' 00:06:58.146 16:55:48 version -- app/version.sh@20 -- # suffix=-pre 00:06:58.146 16:55:48 version -- app/version.sh@22 -- # version=24.9 00:06:58.146 16:55:48 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:58.146 16:55:48 version -- app/version.sh@28 -- # version=24.9rc0 00:06:58.146 16:55:48 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:58.146 16:55:48 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:58.146 16:55:48 version -- app/version.sh@30 -- # py_version=24.9rc0 00:06:58.146 16:55:48 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:06:58.146 00:06:58.146 real 0m0.144s 00:06:58.146 user 0m0.082s 00:06:58.146 sys 0m0.088s 00:06:58.146 ************************************ 00:06:58.146 END TEST version 00:06:58.146 ************************************ 00:06:58.146 16:55:48 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:58.146 16:55:48 version -- common/autotest_common.sh@10 -- # set +x 00:06:58.146 16:55:48 -- common/autotest_common.sh@1142 -- # return 0 00:06:58.146 16:55:48 -- spdk/autotest.sh@188 -- # 
'[' 0 -eq 1 ']' 00:06:58.146 16:55:48 -- spdk/autotest.sh@198 -- # uname -s 00:06:58.146 16:55:48 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:06:58.146 16:55:48 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:58.146 16:55:48 -- spdk/autotest.sh@199 -- # [[ 1 -eq 1 ]] 00:06:58.146 16:55:48 -- spdk/autotest.sh@205 -- # [[ 0 -eq 0 ]] 00:06:58.146 16:55:48 -- spdk/autotest.sh@206 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:06:58.146 16:55:48 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:58.146 16:55:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:58.146 16:55:48 -- common/autotest_common.sh@10 -- # set +x 00:06:58.146 ************************************ 00:06:58.146 START TEST spdk_dd 00:06:58.146 ************************************ 00:06:58.146 16:55:48 spdk_dd -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:06:58.146 * Looking for test storage... 00:06:58.146 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:58.146 16:55:48 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:58.146 16:55:48 spdk_dd -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:58.146 16:55:48 spdk_dd -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:58.146 16:55:48 spdk_dd -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:58.146 16:55:48 spdk_dd -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.146 16:55:48 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.146 16:55:48 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.146 16:55:48 spdk_dd -- paths/export.sh@5 -- # export PATH 00:06:58.146 16:55:48 spdk_dd -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.146 16:55:48 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:58.403 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:58.663 
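Aside, for context on the version test that finished just above: get_header_version is only a grep/cut/tr pipeline over include/spdk/version.h, and the result is compared against the Python package's idea of the version; for this pre-release tree the comparison value ends up as 24.9rc0 on both sides. A rough standalone equivalent, with illustrative variable names, assuming the same checkout:

    H=/home/vagrant/spdk_repo/spdk/include/spdk/version.h
    major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' "$H" | cut -f2 | tr -d '"')
    minor=$(grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' "$H" | cut -f2 | tr -d '"')
    suffix=$(grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' "$H" | cut -f2 | tr -d '"')
    echo "${major}.${minor}${suffix}"                    # 24.9-pre for this tree
    python3 -c 'import spdk; print(spdk.__version__)'    # 24.9rc0, with PYTHONPATH pointing at the repo's python/ dir as above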
0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:58.663 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:58.663 16:55:48 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:06:58.663 16:55:48 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:06:58.663 16:55:48 spdk_dd -- scripts/common.sh@309 -- # local bdf bdfs 00:06:58.663 16:55:48 spdk_dd -- scripts/common.sh@310 -- # local nvmes 00:06:58.663 16:55:48 spdk_dd -- scripts/common.sh@312 -- # [[ -n '' ]] 00:06:58.663 16:55:48 spdk_dd -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:06:58.663 16:55:48 spdk_dd -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:06:58.663 16:55:48 spdk_dd -- scripts/common.sh@295 -- # local bdf= 00:06:58.663 16:55:48 spdk_dd -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:06:58.663 16:55:48 spdk_dd -- scripts/common.sh@230 -- # local class 00:06:58.663 16:55:48 spdk_dd -- scripts/common.sh@231 -- # local subclass 00:06:58.663 16:55:48 spdk_dd -- scripts/common.sh@232 -- # local progif 00:06:58.663 16:55:48 spdk_dd -- scripts/common.sh@233 -- # printf %02x 1 00:06:58.663 16:55:48 spdk_dd -- scripts/common.sh@233 -- # class=01 00:06:58.663 16:55:48 spdk_dd -- scripts/common.sh@234 -- # printf %02x 8 00:06:58.663 16:55:48 spdk_dd -- scripts/common.sh@234 -- # subclass=08 00:06:58.663 16:55:48 spdk_dd -- scripts/common.sh@235 -- # printf %02x 2 00:06:58.663 16:55:48 spdk_dd -- scripts/common.sh@235 -- # progif=02 00:06:58.663 16:55:48 spdk_dd -- scripts/common.sh@237 -- # hash lspci 00:06:58.663 16:55:48 spdk_dd -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:06:58.663 16:55:48 spdk_dd -- scripts/common.sh@239 -- # lspci -mm -n -D 00:06:58.663 16:55:48 spdk_dd -- scripts/common.sh@240 -- # grep -i -- -p02 00:06:58.663 16:55:48 spdk_dd -- scripts/common.sh@242 -- # tr -d '"' 00:06:58.663 16:55:48 spdk_dd -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:06:58.663 16:55:48 spdk_dd -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:06:58.663 16:55:48 spdk_dd -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:06:58.663 16:55:48 spdk_dd -- scripts/common.sh@15 -- # local i 00:06:58.663 16:55:48 spdk_dd -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:06:58.663 16:55:48 spdk_dd -- scripts/common.sh@22 -- # [[ -z '' ]] 00:06:58.663 16:55:48 spdk_dd -- scripts/common.sh@24 -- # return 0 00:06:58.663 16:55:48 spdk_dd -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:06:58.663 16:55:48 spdk_dd -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:06:58.663 16:55:48 spdk_dd -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:06:58.663 16:55:48 spdk_dd -- scripts/common.sh@15 -- # local i 00:06:58.663 16:55:48 spdk_dd -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:06:58.663 16:55:48 spdk_dd -- scripts/common.sh@22 -- # [[ -z '' ]] 00:06:58.663 16:55:48 spdk_dd -- scripts/common.sh@24 -- # return 0 00:06:58.663 16:55:48 spdk_dd -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:06:58.663 16:55:48 spdk_dd -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:06:58.663 16:55:48 spdk_dd -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:06:58.663 16:55:48 spdk_dd -- scripts/common.sh@320 -- # uname -s 00:06:58.663 16:55:48 spdk_dd -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:06:58.663 16:55:48 spdk_dd -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:06:58.663 16:55:48 
spdk_dd -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:06:58.663 16:55:48 spdk_dd -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:06:58.663 16:55:48 spdk_dd -- scripts/common.sh@320 -- # uname -s 00:06:58.663 16:55:48 spdk_dd -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:06:58.663 16:55:48 spdk_dd -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:06:58.663 16:55:48 spdk_dd -- scripts/common.sh@325 -- # (( 2 )) 00:06:58.663 16:55:48 spdk_dd -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:06:58.663 16:55:48 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:06:58.663 16:55:48 spdk_dd -- dd/common.sh@139 -- # local lib so 00:06:58.663 16:55:48 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:06:58.663 16:55:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:58.663 16:55:48 spdk_dd -- dd/common.sh@137 -- # LD_TRACE_LOADED_OBJECTS=1 00:06:58.663 16:55:48 spdk_dd -- dd/common.sh@137 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:58.663 16:55:48 spdk_dd -- dd/common.sh@143 -- # [[ linux-vdso.so.1 == liburing.so.* ]] 00:06:58.663 16:55:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:58.663 16:55:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:06:58.663 16:55:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:58.663 16:55:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:06:58.663 16:55:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:58.663 16:55:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.0 == liburing.so.* ]] 00:06:58.663 16:55:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:58.663 16:55:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:06:58.663 16:55:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:58.663 16:55:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:06:58.663 16:55:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:58.663 16:55:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:06:58.663 16:55:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:58.663 16:55:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:06:58.663 16:55:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:58.663 16:55:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:06:58.663 16:55:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:58.663 16:55:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:06:58.663 16:55:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:58.663 16:55:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 00:06:58.663 16:55:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:58.663 16:55:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:06:58.663 16:55:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:58.663 16:55:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:06:58.663 16:55:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:58.663 16:55:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.10.0 == liburing.so.* ]] 00:06:58.663 16:55:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:58.663 16:55:48 spdk_dd -- 
dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.11.0 == liburing.so.* ]] 00:06:58.663 16:55:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:58.663 16:55:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.10.0 == liburing.so.* ]] 00:06:58.663 16:55:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:58.663 16:55:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.11.0 == liburing.so.* ]] 00:06:58.663 16:55:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:58.663 16:55:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.13.1 == liburing.so.* ]] 00:06:58.663 16:55:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:58.663 16:55:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_provider.so.6.0 == liburing.so.* ]] 00:06:58.663 16:55:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:58.663 16:55:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_utils.so.1.0 == liburing.so.* ]] 00:06:58.663 16:55:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:58.663 16:55:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:06:58.663 16:55:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:58.663 16:55:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:06:58.663 16:55:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:58.663 16:55:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:06:58.663 16:55:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:58.663 16:55:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:06:58.663 16:55:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:58.663 16:55:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:06:58.663 16:55:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:58.663 16:55:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:06:58.663 16:55:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:58.663 16:55:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:06:58.663 16:55:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:58.663 16:55:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:06:58.663 16:55:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:58.663 16:55:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:06:58.663 16:55:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:58.663 16:55:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:06:58.664 16:55:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:58.664 16:55:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:06:58.664 16:55:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:58.664 16:55:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:06:58.664 16:55:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:58.664 16:55:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:06:58.664 16:55:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:58.664 16:55:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.0 == liburing.so.* ]] 00:06:58.664 16:55:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:58.664 16:55:48 spdk_dd -- dd/common.sh@143 -- # [[ 
libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:06:58.664 16:55:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:58.664 16:55:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.14.1 == liburing.so.* ]] 00:06:58.664 16:55:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:58.664 16:55:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:06:58.664 16:55:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:58.664 16:55:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:06:58.664 16:55:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:58.664 16:55:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:06:58.664 16:55:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:58.664 16:55:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:06:58.664 16:55:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:58.664 16:55:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.1.0 == liburing.so.* ]] 00:06:58.664 16:55:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:58.664 16:55:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_linux.so.1.0 == liburing.so.* ]] 00:06:58.664 16:55:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:58.664 16:55:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event.so.14.0 == liburing.so.* ]] 00:06:58.664 16:55:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:58.664 16:55:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:06:58.664 16:55:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:58.664 16:55:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.15.1 == liburing.so.* ]] 00:06:58.664 16:55:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:58.664 16:55:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:06:58.664 16:55:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:58.664 16:55:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:06:58.664 16:55:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:58.664 16:55:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.15.1 == liburing.so.* ]] 00:06:58.664 16:55:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:58.664 16:55:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.4.0 == liburing.so.* ]] 00:06:58.664 16:55:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:58.664 16:55:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:06:58.664 16:55:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:58.664 16:55:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:06:58.664 16:55:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:58.664 16:55:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:06:58.664 16:55:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:58.664 16:55:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.10.0 == liburing.so.* ]] 00:06:58.664 16:55:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:58.664 16:55:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:06:58.664 16:55:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:58.664 16:55:48 spdk_dd -- dd/common.sh@143 -- 
# [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:06:58.664 16:55:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:58.664 16:55:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.5.0 == liburing.so.* ]] 00:06:58.664 16:55:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:58.664 16:55:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.10.1 == liburing.so.* ]] 00:06:58.664 16:55:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:58.664 16:55:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.10.0 == liburing.so.* ]] 00:06:58.664 16:55:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:58.664 16:55:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.1.0 == liburing.so.* ]] 00:06:58.664 16:55:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:58.664 16:55:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* ]] 00:06:58.664 16:55:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:58.664 16:55:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:06:58.664 16:55:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:58.664 16:55:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:06:58.664 16:55:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:58.664 16:55:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.9.1 == liburing.so.* ]] 00:06:58.664 16:55:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:58.664 16:55:48 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.0 == liburing.so.* ]] 00:06:58.664 16:55:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:58.664 16:55:48 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:06:58.664 16:55:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:58.664 16:55:48 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:06:58.664 16:55:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:58.664 16:55:48 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:06:58.664 16:55:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:58.664 16:55:48 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:06:58.664 16:55:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:58.664 16:55:48 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:06:58.664 16:55:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:58.664 16:55:48 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:06:58.664 16:55:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:58.664 16:55:48 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:06:58.664 16:55:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:58.664 16:55:48 spdk_dd -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:06:58.664 16:55:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:58.664 16:55:48 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:06:58.664 16:55:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:58.664 16:55:48 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:06:58.664 16:55:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:58.664 16:55:48 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:06:58.664 16:55:48 spdk_dd -- dd/common.sh@142 
-- # read -r lib _ so _ 00:06:58.664 16:55:48 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:06:58.664 16:55:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:58.664 16:55:48 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:06:58.664 16:55:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:58.664 16:55:48 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:06:58.664 16:55:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:58.664 16:55:48 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:06:58.664 16:55:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:58.664 16:55:48 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:06:58.664 16:55:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:58.664 16:55:48 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]] 00:06:58.664 16:55:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:58.664 16:55:48 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:06:58.665 16:55:48 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:58.665 16:55:48 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:06:58.665 16:55:48 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:06:58.665 * spdk_dd linked to liburing 00:06:58.665 16:55:48 spdk_dd -- dd/common.sh@146 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:06:58.665 16:55:48 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:06:58.665 16:55:48 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:58.665 16:55:48 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:58.665 16:55:48 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:58.665 16:55:48 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:58.665 16:55:48 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:06:58.665 16:55:48 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:58.665 16:55:48 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:58.665 16:55:48 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:58.665 16:55:48 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:58.665 16:55:48 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:58.665 16:55:48 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:58.665 16:55:48 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:58.665 16:55:48 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:58.665 16:55:48 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:58.665 16:55:48 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:58.665 16:55:48 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:58.665 16:55:48 spdk_dd -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:06:58.665 16:55:48 spdk_dd -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:58.665 16:55:48 spdk_dd -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:06:58.665 16:55:48 spdk_dd -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:06:58.665 16:55:48 spdk_dd -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:06:58.665 16:55:48 spdk_dd -- common/build_config.sh@22 -- # CONFIG_CET=n 00:06:58.665 
16:55:48 spdk_dd -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:58.665 16:55:48 spdk_dd -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:06:58.665 16:55:48 spdk_dd -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:06:58.665 16:55:48 spdk_dd -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:58.665 16:55:48 spdk_dd -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:58.665 16:55:48 spdk_dd -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:06:58.665 16:55:48 spdk_dd -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:06:58.665 16:55:48 spdk_dd -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:06:58.665 16:55:48 spdk_dd -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:06:58.665 16:55:48 spdk_dd -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:06:58.665 16:55:48 spdk_dd -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:06:58.665 16:55:48 spdk_dd -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:06:58.665 16:55:48 spdk_dd -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:06:58.665 16:55:48 spdk_dd -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:06:58.665 16:55:48 spdk_dd -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:06:58.665 16:55:48 spdk_dd -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:06:58.665 16:55:48 spdk_dd -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:06:58.665 16:55:48 spdk_dd -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:06:58.665 16:55:48 spdk_dd -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:06:58.665 16:55:48 spdk_dd -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:06:58.665 16:55:48 spdk_dd -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:06:58.665 16:55:48 spdk_dd -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:58.665 16:55:48 spdk_dd -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:06:58.665 16:55:48 spdk_dd -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:06:58.665 16:55:48 spdk_dd -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:06:58.665 16:55:48 spdk_dd -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:06:58.665 16:55:48 spdk_dd -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:58.665 16:55:48 spdk_dd -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:06:58.665 16:55:48 spdk_dd -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:06:58.665 16:55:48 spdk_dd -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:06:58.665 16:55:48 spdk_dd -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:06:58.665 16:55:48 spdk_dd -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:06:58.665 16:55:48 spdk_dd -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=y 00:06:58.665 16:55:48 spdk_dd -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:06:58.665 16:55:48 spdk_dd -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:06:58.665 16:55:48 spdk_dd -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:06:58.665 16:55:48 spdk_dd -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:06:58.665 16:55:48 spdk_dd -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:06:58.665 16:55:48 spdk_dd -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:06:58.665 16:55:48 spdk_dd -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:06:58.665 16:55:48 spdk_dd -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:06:58.665 16:55:48 spdk_dd -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:06:58.665 16:55:48 spdk_dd -- 
common/build_config.sh@65 -- # CONFIG_APPS=y 00:06:58.665 16:55:48 spdk_dd -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:06:58.665 16:55:48 spdk_dd -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:06:58.665 16:55:48 spdk_dd -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:06:58.665 16:55:48 spdk_dd -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:58.665 16:55:48 spdk_dd -- common/build_config.sh@70 -- # CONFIG_FC=n 00:06:58.665 16:55:48 spdk_dd -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:06:58.665 16:55:48 spdk_dd -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:06:58.665 16:55:48 spdk_dd -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:06:58.665 16:55:48 spdk_dd -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:06:58.665 16:55:48 spdk_dd -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:06:58.665 16:55:48 spdk_dd -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:06:58.665 16:55:48 spdk_dd -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:06:58.665 16:55:48 spdk_dd -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:06:58.665 16:55:48 spdk_dd -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:06:58.665 16:55:48 spdk_dd -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:06:58.665 16:55:48 spdk_dd -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:58.665 16:55:48 spdk_dd -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:06:58.665 16:55:48 spdk_dd -- common/build_config.sh@83 -- # CONFIG_URING=y 00:06:58.665 16:55:48 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:06:58.665 16:55:48 spdk_dd -- dd/common.sh@152 -- # [[ ! -e /usr/lib64/liburing.so.2 ]] 00:06:58.665 16:55:48 spdk_dd -- dd/common.sh@156 -- # export liburing_in_use=1 00:06:58.665 16:55:48 spdk_dd -- dd/common.sh@156 -- # liburing_in_use=1 00:06:58.665 16:55:48 spdk_dd -- dd/common.sh@157 -- # return 0 00:06:58.665 16:55:48 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:06:58.665 16:55:48 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:06:58.665 16:55:48 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:58.665 16:55:48 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:58.665 16:55:48 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:58.665 ************************************ 00:06:58.665 START TEST spdk_dd_basic_rw 00:06:58.665 ************************************ 00:06:58.665 16:55:48 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:06:58.665 * Looking for test storage... 
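For readers following the dd/common.sh trace above: the @142–@157 block walks the shared objects that spdk_dd links against, matches each one against liburing.so.*, and, together with CONFIG_URING=y from the sourced build_config.sh and the presence of /usr/lib64/liburing.so.2, ends up exporting liburing_in_use=1. A minimal bash sketch of that probe is below; the helper name probe_liburing and the direct use of ldd are illustrative, not SPDK's exact code.

```bash
#!/usr/bin/env bash
# Sketch of the liburing linkage probe walked through in the dd/common.sh trace:
# scan the DSOs spdk_dd links against, look for liburing.so.*, and only mark
# liburing as usable when the runtime library is actually installed.
# probe_liburing (and calling ldd directly) are illustrative, not SPDK's exact code.
probe_liburing() {
    local dd_bin=$1 lib _ so _ linked=0
    # ldd prints lines like "liburing.so.2 => /usr/lib64/liburing.so.2 (0x...)"
    while read -r lib _ so _; do
        [[ $lib == liburing.so.* ]] && linked=1
    done < <(ldd "$dd_bin")
    if (( linked )); then
        printf '* spdk_dd linked to liburing\n'
        [[ -e /usr/lib64/liburing.so.2 ]] && export liburing_in_use=1
    fi
}
probe_liburing /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
```

With liburing_in_use=1 the guard at dd/dd.sh@15 (which appears to trip only when uring testing is requested while liburing is not in use) evaluates false, and the basic_rw suite below proceeds against the uring-enabled build.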
00:06:58.665 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:58.665 16:55:48 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:58.665 16:55:48 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:58.665 16:55:48 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:58.665 16:55:48 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:58.666 16:55:48 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.666 16:55:48 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.666 16:55:48 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.666 16:55:48 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:06:58.666 16:55:48 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.666 16:55:48 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:06:58.666 16:55:48 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:06:58.666 16:55:48 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:06:58.666 16:55:48 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:06:58.666 16:55:48 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:06:58.666 16:55:48 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # 
method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:06:58.666 16:55:48 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:06:58.666 16:55:48 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:58.666 16:55:48 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:58.666 16:55:48 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:06:58.666 16:55:48 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:06:58.666 16:55:48 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:06:58.941 16:55:48 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:06:58.942 16:55:49 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace 
Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not 
Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 57 Data Units Written: 3 Host Read Commands: 1329 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:06:58.942 16:55:49 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:06:58.943 16:55:49 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial 
Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported 
Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 57 Data Units Written: 3 Host Read Commands: 1329 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private 
Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:06:58.943 16:55:49 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:06:58.943 16:55:49 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:06:58.943 16:55:49 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:06:58.943 16:55:49 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:58.943 16:55:49 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:06:58.943 16:55:49 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:06:58.943 16:55:49 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:58.943 16:55:49 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:58.943 16:55:49 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:58.943 16:55:49 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:58.943 16:55:49 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:58.943 ************************************ 00:06:58.943 START TEST dd_bs_lt_native_bs 00:06:58.943 ************************************ 00:06:58.943 16:55:49 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1123 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:58.943 16:55:49 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@648 -- # local es=0 00:06:58.943 16:55:49 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@650 -- # valid_exec_arg 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:58.943 16:55:49 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:58.943 16:55:49 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:58.943 16:55:49 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:58.943 16:55:49 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:58.943 16:55:49 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:58.943 16:55:49 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:58.943 16:55:49 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:58.943 16:55:49 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:58.943 16:55:49 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:58.943 { 00:06:58.943 "subsystems": [ 00:06:58.943 { 00:06:58.943 "subsystem": "bdev", 00:06:58.943 "config": [ 00:06:58.943 { 00:06:58.943 "params": { 00:06:58.943 "trtype": "pcie", 00:06:58.943 "traddr": "0000:00:10.0", 00:06:58.943 "name": "Nvme0" 00:06:58.944 }, 00:06:58.944 "method": "bdev_nvme_attach_controller" 00:06:58.944 }, 00:06:58.944 { 00:06:58.944 "method": "bdev_wait_for_examine" 00:06:58.944 } 00:06:58.944 ] 00:06:58.944 } 00:06:58.944 ] 00:06:58.944 } 00:06:59.220 [2024-07-15 16:55:49.228469] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
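The dd_bs_lt_native_bs test traced above is a negative test: the native block size of the namespace was extracted as 4096 from the spdk_nvme_identify dump earlier, and spdk_dd is launched through the NOT wrapper with --bs=2048, so the test only passes if spdk_dd rejects the undersized block size (the "--bs value cannot be less than ... native block size" error further down). A rough bash sketch of that pattern follows; run_negative is a simplified stand-in for the suite's NOT() helper, the /dev/zero input is a placeholder, and validation is assumed to reject --bs before any data is copied.

```bash
#!/usr/bin/env bash
# Sketch of the dd_bs_lt_native_bs negative test: --bs=2048 is smaller than the
# 4096-byte native block size of Nvme0n1, so spdk_dd is expected to exit non-zero.
# run_negative is a simplified stand-in for the NOT() helper used by the test suite.
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd

run_negative() {
    if "$@"; then
        echo "expected failure, but '$*' succeeded" >&2
        return 1
    fi
    return 0   # the command failing is the desired outcome
}

# Bdev config mirroring the JSON emitted by gen_conf in the trace above.
conf=$(mktemp)
cat > "$conf" <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "params": { "trtype": "pcie", "traddr": "0000:00:10.0", "name": "Nvme0" },
          "method": "bdev_nvme_attach_controller" },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
JSON

run_negative "$SPDK_DD" --if=/dev/zero --ob=Nvme0n1 --bs=2048 --json "$conf"
rm -f "$conf"
```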
00:06:59.220 [2024-07-15 16:55:49.228612] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62415 ] 00:06:59.220 [2024-07-15 16:55:49.370559] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.220 [2024-07-15 16:55:49.490792] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.478 [2024-07-15 16:55:49.545938] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:59.478 [2024-07-15 16:55:49.652226] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:06:59.478 [2024-07-15 16:55:49.652293] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:59.478 [2024-07-15 16:55:49.773270] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:59.737 16:55:49 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@651 -- # es=234 00:06:59.737 16:55:49 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:59.737 16:55:49 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@660 -- # es=106 00:06:59.737 16:55:49 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@661 -- # case "$es" in 00:06:59.737 16:55:49 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@668 -- # es=1 00:06:59.737 16:55:49 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:59.737 00:06:59.737 real 0m0.713s 00:06:59.737 user 0m0.498s 00:06:59.737 sys 0m0.165s 00:06:59.737 16:55:49 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:59.737 16:55:49 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:06:59.737 ************************************ 00:06:59.737 END TEST dd_bs_lt_native_bs 00:06:59.737 ************************************ 00:06:59.737 16:55:49 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1142 -- # return 0 00:06:59.737 16:55:49 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:06:59.737 16:55:49 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:59.737 16:55:49 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:59.737 16:55:49 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:59.737 ************************************ 00:06:59.737 START TEST dd_rw 00:06:59.737 ************************************ 00:06:59.737 16:55:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1123 -- # basic_rw 4096 00:06:59.737 16:55:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:06:59.737 16:55:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:06:59.737 16:55:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:06:59.737 16:55:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:06:59.737 16:55:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:59.737 16:55:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:59.737 16:55:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- 
dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:59.737 16:55:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:59.737 16:55:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:59.737 16:55:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:59.737 16:55:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:59.737 16:55:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:59.737 16:55:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:06:59.737 16:55:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:06:59.737 16:55:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:06:59.737 16:55:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:06:59.737 16:55:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:59.737 16:55:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:00.304 16:55:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:07:00.304 16:55:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:00.304 16:55:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:00.304 16:55:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:00.304 [2024-07-15 16:55:50.559377] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:07:00.304 [2024-07-15 16:55:50.559461] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62454 ] 00:07:00.304 { 00:07:00.304 "subsystems": [ 00:07:00.304 { 00:07:00.304 "subsystem": "bdev", 00:07:00.304 "config": [ 00:07:00.304 { 00:07:00.304 "params": { 00:07:00.304 "trtype": "pcie", 00:07:00.304 "traddr": "0000:00:10.0", 00:07:00.304 "name": "Nvme0" 00:07:00.304 }, 00:07:00.304 "method": "bdev_nvme_attach_controller" 00:07:00.304 }, 00:07:00.304 { 00:07:00.304 "method": "bdev_wait_for_examine" 00:07:00.304 } 00:07:00.304 ] 00:07:00.304 } 00:07:00.304 ] 00:07:00.304 } 00:07:00.562 [2024-07-15 16:55:50.695269] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.562 [2024-07-15 16:55:50.811617] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.820 [2024-07-15 16:55:50.865468] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:01.079  Copying: 60/60 [kB] (average 29 MBps) 00:07:01.079 00:07:01.079 16:55:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:07:01.079 16:55:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:01.079 16:55:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:01.079 16:55:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:01.079 { 00:07:01.079 "subsystems": [ 00:07:01.079 { 00:07:01.079 "subsystem": "bdev", 00:07:01.079 "config": [ 
00:07:01.079 { 00:07:01.079 "params": { 00:07:01.079 "trtype": "pcie", 00:07:01.079 "traddr": "0000:00:10.0", 00:07:01.079 "name": "Nvme0" 00:07:01.079 }, 00:07:01.079 "method": "bdev_nvme_attach_controller" 00:07:01.079 }, 00:07:01.079 { 00:07:01.079 "method": "bdev_wait_for_examine" 00:07:01.079 } 00:07:01.079 ] 00:07:01.079 } 00:07:01.079 ] 00:07:01.079 } 00:07:01.079 [2024-07-15 16:55:51.275310] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:07:01.079 [2024-07-15 16:55:51.275792] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62467 ] 00:07:01.336 [2024-07-15 16:55:51.423753] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.336 [2024-07-15 16:55:51.540264] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.336 [2024-07-15 16:55:51.594655] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:01.854  Copying: 60/60 [kB] (average 29 MBps) 00:07:01.854 00:07:01.854 16:55:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:01.854 16:55:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:07:01.854 16:55:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:01.854 16:55:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:01.854 16:55:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:07:01.854 16:55:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:01.854 16:55:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:01.854 16:55:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:01.854 16:55:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:01.854 16:55:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:01.854 16:55:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:01.854 [2024-07-15 16:55:51.993556] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:07:01.854 [2024-07-15 16:55:51.994793] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62483 ] 00:07:01.854 { 00:07:01.854 "subsystems": [ 00:07:01.854 { 00:07:01.854 "subsystem": "bdev", 00:07:01.854 "config": [ 00:07:01.854 { 00:07:01.854 "params": { 00:07:01.854 "trtype": "pcie", 00:07:01.854 "traddr": "0000:00:10.0", 00:07:01.854 "name": "Nvme0" 00:07:01.854 }, 00:07:01.854 "method": "bdev_nvme_attach_controller" 00:07:01.854 }, 00:07:01.854 { 00:07:01.854 "method": "bdev_wait_for_examine" 00:07:01.854 } 00:07:01.854 ] 00:07:01.854 } 00:07:01.854 ] 00:07:01.854 } 00:07:01.854 [2024-07-15 16:55:52.141247] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.114 [2024-07-15 16:55:52.259159] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.114 [2024-07-15 16:55:52.312723] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:02.373  Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:02.373 00:07:02.373 16:55:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:02.373 16:55:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:07:02.373 16:55:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:07:02.373 16:55:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:07:02.373 16:55:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:07:02.373 16:55:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:02.373 16:55:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:03.308 16:55:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:07:03.308 16:55:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:03.308 16:55:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:03.308 16:55:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:03.308 [2024-07-15 16:55:53.343737] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
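Each dd_rw pass traced from here on follows the same write, read-back, verify cycle: gen_bytes fills dd.dump0 with count x bs bytes, spdk_dd writes that file to the Nvme0n1 bdev, reads the same number of blocks back into dd.dump1, and diff -q confirms the round trip; clear_nvme then overwrites the first MiB with zeros before the next pass. A condensed sketch of the 4096-byte, qd=1 pass just completed above (the "Copying: 60/60 [kB]" lines correspond to 15 x 4096 = 61440 bytes); the random-data generation stands in for gen_bytes and the JSON config path is hypothetical.

```bash
#!/usr/bin/env bash
# Sketch of one dd_rw iteration as traced above (dd/basic_rw.sh@30-45):
# write a generated file to the NVMe bdev, read it back, and diff the two copies.
# CONF_JSON stands in for the gen_conf output shown in the trace; bs/qd/count follow
# the 15 x 4096 = 61440-byte pass logged as "Copying: 60/60 [kB]".
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
DUMP0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
DUMP1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
CONF_JSON=/tmp/nvme0.json     # bdev_nvme_attach_controller config for 0000:00:10.0

bs=4096 qd=1 count=15
head -c $((bs * count)) /dev/urandom > "$DUMP0"     # 61440 bytes of test data (stand-in for gen_bytes)

"$SPDK_DD" --if="$DUMP0" --ob=Nvme0n1 --bs="$bs" --qd="$qd" --json "$CONF_JSON"
"$SPDK_DD" --ib=Nvme0n1 --of="$DUMP1" --bs="$bs" --qd="$qd" --count="$count" --json "$CONF_JSON"
diff -q "$DUMP0" "$DUMP1"                           # verify the round trip

# clear_nvme: zero the first 1 MiB so the next pass starts from clean blocks
"$SPDK_DD" --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json "$CONF_JSON"
```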
00:07:03.308 [2024-07-15 16:55:53.344608] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62507 ] 00:07:03.308 { 00:07:03.308 "subsystems": [ 00:07:03.308 { 00:07:03.308 "subsystem": "bdev", 00:07:03.308 "config": [ 00:07:03.308 { 00:07:03.308 "params": { 00:07:03.308 "trtype": "pcie", 00:07:03.308 "traddr": "0000:00:10.0", 00:07:03.308 "name": "Nvme0" 00:07:03.308 }, 00:07:03.308 "method": "bdev_nvme_attach_controller" 00:07:03.308 }, 00:07:03.308 { 00:07:03.308 "method": "bdev_wait_for_examine" 00:07:03.308 } 00:07:03.308 ] 00:07:03.308 } 00:07:03.308 ] 00:07:03.308 } 00:07:03.308 [2024-07-15 16:55:53.488564] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.567 [2024-07-15 16:55:53.606848] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.567 [2024-07-15 16:55:53.661220] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:03.825  Copying: 60/60 [kB] (average 58 MBps) 00:07:03.825 00:07:03.825 16:55:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:07:03.825 16:55:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:03.825 16:55:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:03.825 16:55:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:03.825 [2024-07-15 16:55:54.053442] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:07:03.825 [2024-07-15 16:55:54.053543] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62521 ] 00:07:03.825 { 00:07:03.825 "subsystems": [ 00:07:03.825 { 00:07:03.825 "subsystem": "bdev", 00:07:03.825 "config": [ 00:07:03.825 { 00:07:03.825 "params": { 00:07:03.825 "trtype": "pcie", 00:07:03.825 "traddr": "0000:00:10.0", 00:07:03.825 "name": "Nvme0" 00:07:03.825 }, 00:07:03.825 "method": "bdev_nvme_attach_controller" 00:07:03.825 }, 00:07:03.825 { 00:07:03.825 "method": "bdev_wait_for_examine" 00:07:03.825 } 00:07:03.825 ] 00:07:03.825 } 00:07:03.825 ] 00:07:03.825 } 00:07:04.084 [2024-07-15 16:55:54.191798] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.084 [2024-07-15 16:55:54.303817] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.084 [2024-07-15 16:55:54.356645] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:04.601  Copying: 60/60 [kB] (average 58 MBps) 00:07:04.601 00:07:04.601 16:55:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:04.601 16:55:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:07:04.601 16:55:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:04.601 16:55:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:04.601 16:55:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:07:04.601 16:55:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:04.601 16:55:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:04.601 16:55:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:04.601 16:55:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:04.601 16:55:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:04.601 16:55:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:04.601 [2024-07-15 16:55:54.741465] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:07:04.601 [2024-07-15 16:55:54.741564] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62542 ] 00:07:04.601 { 00:07:04.601 "subsystems": [ 00:07:04.601 { 00:07:04.601 "subsystem": "bdev", 00:07:04.602 "config": [ 00:07:04.602 { 00:07:04.602 "params": { 00:07:04.602 "trtype": "pcie", 00:07:04.602 "traddr": "0000:00:10.0", 00:07:04.602 "name": "Nvme0" 00:07:04.602 }, 00:07:04.602 "method": "bdev_nvme_attach_controller" 00:07:04.602 }, 00:07:04.602 { 00:07:04.602 "method": "bdev_wait_for_examine" 00:07:04.602 } 00:07:04.602 ] 00:07:04.602 } 00:07:04.602 ] 00:07:04.602 } 00:07:04.602 [2024-07-15 16:55:54.880639] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.861 [2024-07-15 16:55:54.992882] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.861 [2024-07-15 16:55:55.046442] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:05.119  Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:05.119 00:07:05.119 16:55:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:07:05.119 16:55:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:05.119 16:55:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:07:05.119 16:55:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:07:05.119 16:55:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:07:05.119 16:55:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:07:05.119 16:55:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:05.119 16:55:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:05.687 16:55:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:07:05.687 16:55:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:05.687 16:55:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:05.687 16:55:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:05.946 [2024-07-15 16:55:56.021014] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:07:05.946 [2024-07-15 16:55:56.022036] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62561 ] 00:07:05.946 { 00:07:05.946 "subsystems": [ 00:07:05.946 { 00:07:05.946 "subsystem": "bdev", 00:07:05.946 "config": [ 00:07:05.946 { 00:07:05.946 "params": { 00:07:05.946 "trtype": "pcie", 00:07:05.946 "traddr": "0000:00:10.0", 00:07:05.946 "name": "Nvme0" 00:07:05.946 }, 00:07:05.946 "method": "bdev_nvme_attach_controller" 00:07:05.946 }, 00:07:05.946 { 00:07:05.946 "method": "bdev_wait_for_examine" 00:07:05.946 } 00:07:05.946 ] 00:07:05.946 } 00:07:05.946 ] 00:07:05.946 } 00:07:05.946 [2024-07-15 16:55:56.170209] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.205 [2024-07-15 16:55:56.324682] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.205 [2024-07-15 16:55:56.379182] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:06.464  Copying: 56/56 [kB] (average 54 MBps) 00:07:06.464 00:07:06.464 16:55:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:07:06.464 16:55:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:06.464 16:55:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:06.464 16:55:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:06.464 [2024-07-15 16:55:56.747785] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:07:06.464 [2024-07-15 16:55:56.747878] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62580 ] 00:07:06.464 { 00:07:06.464 "subsystems": [ 00:07:06.464 { 00:07:06.464 "subsystem": "bdev", 00:07:06.464 "config": [ 00:07:06.464 { 00:07:06.464 "params": { 00:07:06.464 "trtype": "pcie", 00:07:06.464 "traddr": "0000:00:10.0", 00:07:06.464 "name": "Nvme0" 00:07:06.464 }, 00:07:06.464 "method": "bdev_nvme_attach_controller" 00:07:06.464 }, 00:07:06.464 { 00:07:06.464 "method": "bdev_wait_for_examine" 00:07:06.464 } 00:07:06.464 ] 00:07:06.464 } 00:07:06.464 ] 00:07:06.464 } 00:07:06.723 [2024-07-15 16:55:56.884264] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.723 [2024-07-15 16:55:57.011800] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.982 [2024-07-15 16:55:57.067823] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:07.240  Copying: 56/56 [kB] (average 27 MBps) 00:07:07.240 00:07:07.240 16:55:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:07.240 16:55:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:07:07.240 16:55:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:07.240 16:55:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:07.240 16:55:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:07:07.240 16:55:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:07.240 16:55:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:07.240 16:55:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:07.240 16:55:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:07.240 16:55:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:07.240 16:55:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:07.241 [2024-07-15 16:55:57.467558] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:07:07.241 { 00:07:07.241 "subsystems": [ 00:07:07.241 { 00:07:07.241 "subsystem": "bdev", 00:07:07.241 "config": [ 00:07:07.241 { 00:07:07.241 "params": { 00:07:07.241 "trtype": "pcie", 00:07:07.241 "traddr": "0000:00:10.0", 00:07:07.241 "name": "Nvme0" 00:07:07.241 }, 00:07:07.241 "method": "bdev_nvme_attach_controller" 00:07:07.241 }, 00:07:07.241 { 00:07:07.241 "method": "bdev_wait_for_examine" 00:07:07.241 } 00:07:07.241 ] 00:07:07.241 } 00:07:07.241 ] 00:07:07.241 } 00:07:07.241 [2024-07-15 16:55:57.467655] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62595 ] 00:07:07.499 [2024-07-15 16:55:57.601924] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.499 [2024-07-15 16:55:57.716313] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.499 [2024-07-15 16:55:57.770461] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:08.016  Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:08.016 00:07:08.016 16:55:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:08.016 16:55:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:07:08.016 16:55:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:07:08.016 16:55:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:07:08.016 16:55:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:07:08.016 16:55:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:08.016 16:55:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:08.586 16:55:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:07:08.586 16:55:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:08.586 16:55:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:08.586 16:55:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:08.586 { 00:07:08.586 "subsystems": [ 00:07:08.586 { 00:07:08.586 "subsystem": "bdev", 00:07:08.586 "config": [ 00:07:08.586 { 00:07:08.586 "params": { 00:07:08.586 "trtype": "pcie", 00:07:08.586 "traddr": "0000:00:10.0", 00:07:08.586 "name": "Nvme0" 00:07:08.586 }, 00:07:08.586 "method": "bdev_nvme_attach_controller" 00:07:08.586 }, 00:07:08.586 { 00:07:08.586 "method": "bdev_wait_for_examine" 00:07:08.586 } 00:07:08.586 ] 00:07:08.586 } 00:07:08.586 ] 00:07:08.586 } 00:07:08.586 [2024-07-15 16:55:58.705834] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:07:08.586 [2024-07-15 16:55:58.705920] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62620 ] 00:07:08.586 [2024-07-15 16:55:58.841515] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.846 [2024-07-15 16:55:58.976246] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.846 [2024-07-15 16:55:59.031618] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:09.104  Copying: 56/56 [kB] (average 54 MBps) 00:07:09.104 00:07:09.104 16:55:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:07:09.104 16:55:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:09.104 16:55:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:09.104 16:55:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:09.361 [2024-07-15 16:55:59.414422] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:07:09.361 [2024-07-15 16:55:59.414516] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62628 ] 00:07:09.361 { 00:07:09.361 "subsystems": [ 00:07:09.361 { 00:07:09.361 "subsystem": "bdev", 00:07:09.361 "config": [ 00:07:09.361 { 00:07:09.361 "params": { 00:07:09.361 "trtype": "pcie", 00:07:09.361 "traddr": "0000:00:10.0", 00:07:09.361 "name": "Nvme0" 00:07:09.361 }, 00:07:09.361 "method": "bdev_nvme_attach_controller" 00:07:09.361 }, 00:07:09.361 { 00:07:09.361 "method": "bdev_wait_for_examine" 00:07:09.361 } 00:07:09.361 ] 00:07:09.361 } 00:07:09.361 ] 00:07:09.361 } 00:07:09.361 [2024-07-15 16:55:59.549301] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.619 [2024-07-15 16:55:59.676583] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.619 [2024-07-15 16:55:59.735033] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:09.876  Copying: 56/56 [kB] (average 54 MBps) 00:07:09.876 00:07:09.876 16:56:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:09.876 16:56:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:07:09.876 16:56:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:09.876 16:56:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:09.876 16:56:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:07:09.876 16:56:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:09.876 16:56:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:09.876 16:56:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:09.876 16:56:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- 
dd/common.sh@18 -- # gen_conf 00:07:09.876 16:56:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:09.876 16:56:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:09.876 { 00:07:09.876 "subsystems": [ 00:07:09.876 { 00:07:09.876 "subsystem": "bdev", 00:07:09.876 "config": [ 00:07:09.876 { 00:07:09.876 "params": { 00:07:09.876 "trtype": "pcie", 00:07:09.876 "traddr": "0000:00:10.0", 00:07:09.876 "name": "Nvme0" 00:07:09.876 }, 00:07:09.876 "method": "bdev_nvme_attach_controller" 00:07:09.876 }, 00:07:09.876 { 00:07:09.876 "method": "bdev_wait_for_examine" 00:07:09.876 } 00:07:09.876 ] 00:07:09.876 } 00:07:09.876 ] 00:07:09.876 } 00:07:09.876 [2024-07-15 16:56:00.138765] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:07:09.876 [2024-07-15 16:56:00.138863] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62649 ] 00:07:10.134 [2024-07-15 16:56:00.275665] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.134 [2024-07-15 16:56:00.391403] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.393 [2024-07-15 16:56:00.448271] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:10.651  Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:10.651 00:07:10.651 16:56:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:07:10.651 16:56:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:10.651 16:56:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:07:10.651 16:56:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:07:10.651 16:56:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:07:10.651 16:56:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:07:10.651 16:56:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:10.651 16:56:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:11.218 16:56:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:07:11.218 16:56:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:11.218 16:56:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:11.218 16:56:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:11.218 [2024-07-15 16:56:01.352058] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:07:11.218 [2024-07-15 16:56:01.352161] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62668 ] 00:07:11.218 { 00:07:11.218 "subsystems": [ 00:07:11.218 { 00:07:11.218 "subsystem": "bdev", 00:07:11.218 "config": [ 00:07:11.218 { 00:07:11.218 "params": { 00:07:11.218 "trtype": "pcie", 00:07:11.218 "traddr": "0000:00:10.0", 00:07:11.218 "name": "Nvme0" 00:07:11.218 }, 00:07:11.218 "method": "bdev_nvme_attach_controller" 00:07:11.218 }, 00:07:11.218 { 00:07:11.218 "method": "bdev_wait_for_examine" 00:07:11.218 } 00:07:11.218 ] 00:07:11.218 } 00:07:11.218 ] 00:07:11.218 } 00:07:11.218 [2024-07-15 16:56:01.491690] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.477 [2024-07-15 16:56:01.611845] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.477 [2024-07-15 16:56:01.669594] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:11.735  Copying: 48/48 [kB] (average 46 MBps) 00:07:11.735 00:07:11.735 16:56:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:07:11.735 16:56:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:11.735 16:56:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:11.735 16:56:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:11.993 [2024-07-15 16:56:02.050771] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:07:11.994 [2024-07-15 16:56:02.050864] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62687 ] 00:07:11.994 { 00:07:11.994 "subsystems": [ 00:07:11.994 { 00:07:11.994 "subsystem": "bdev", 00:07:11.994 "config": [ 00:07:11.994 { 00:07:11.994 "params": { 00:07:11.994 "trtype": "pcie", 00:07:11.994 "traddr": "0000:00:10.0", 00:07:11.994 "name": "Nvme0" 00:07:11.994 }, 00:07:11.994 "method": "bdev_nvme_attach_controller" 00:07:11.994 }, 00:07:11.994 { 00:07:11.994 "method": "bdev_wait_for_examine" 00:07:11.994 } 00:07:11.994 ] 00:07:11.994 } 00:07:11.994 ] 00:07:11.994 } 00:07:11.994 [2024-07-15 16:56:02.186689] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.252 [2024-07-15 16:56:02.305275] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.252 [2024-07-15 16:56:02.359341] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:12.509  Copying: 48/48 [kB] (average 46 MBps) 00:07:12.509 00:07:12.509 16:56:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:12.509 16:56:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:07:12.509 16:56:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:12.509 16:56:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:12.509 16:56:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:07:12.509 16:56:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:12.509 16:56:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:12.509 16:56:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:12.509 16:56:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:12.509 16:56:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:12.509 16:56:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:12.509 [2024-07-15 16:56:02.734972] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
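Between iterations the clear_nvme helper resets the device by copying a single 1 MiB block of zeroes onto the start of the bdev, which is what the "--if=/dev/zero --bs=1048576 --count=1" commands and the "Copying: 1024/1024 [kB]" lines correspond to. A simplified sketch, with argument handling reduced to the essentials and gen_conf reused from the earlier sketch:

# Simplified clear_nvme: overwrite the head of the bdev with zeroes before the next (bs, qd) pass.
clear_nvme() {
  local bdev=$1 bs=1048576 count=1
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=$bs --ob="$bdev" --count=$count \
      --json /dev/fd/62 62< <(gen_conf)
}
clear_nvme Nvme0n1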
00:07:12.509 [2024-07-15 16:56:02.735067] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62708 ] 00:07:12.509 { 00:07:12.509 "subsystems": [ 00:07:12.509 { 00:07:12.509 "subsystem": "bdev", 00:07:12.509 "config": [ 00:07:12.509 { 00:07:12.509 "params": { 00:07:12.509 "trtype": "pcie", 00:07:12.509 "traddr": "0000:00:10.0", 00:07:12.509 "name": "Nvme0" 00:07:12.509 }, 00:07:12.509 "method": "bdev_nvme_attach_controller" 00:07:12.509 }, 00:07:12.509 { 00:07:12.509 "method": "bdev_wait_for_examine" 00:07:12.509 } 00:07:12.509 ] 00:07:12.509 } 00:07:12.509 ] 00:07:12.509 } 00:07:12.767 [2024-07-15 16:56:02.872301] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.767 [2024-07-15 16:56:02.999402] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.767 [2024-07-15 16:56:03.059235] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:13.283  Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:13.283 00:07:13.283 16:56:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:13.283 16:56:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:07:13.283 16:56:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:07:13.283 16:56:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:07:13.283 16:56:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:07:13.283 16:56:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:13.283 16:56:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:13.540 16:56:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:07:13.798 16:56:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:13.798 16:56:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:13.798 16:56:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:13.798 [2024-07-15 16:56:03.892109] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:07:13.798 [2024-07-15 16:56:03.892677] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62727 ] 00:07:13.798 { 00:07:13.798 "subsystems": [ 00:07:13.798 { 00:07:13.798 "subsystem": "bdev", 00:07:13.798 "config": [ 00:07:13.798 { 00:07:13.798 "params": { 00:07:13.798 "trtype": "pcie", 00:07:13.798 "traddr": "0000:00:10.0", 00:07:13.798 "name": "Nvme0" 00:07:13.798 }, 00:07:13.798 "method": "bdev_nvme_attach_controller" 00:07:13.798 }, 00:07:13.798 { 00:07:13.798 "method": "bdev_wait_for_examine" 00:07:13.798 } 00:07:13.798 ] 00:07:13.798 } 00:07:13.798 ] 00:07:13.798 } 00:07:13.798 [2024-07-15 16:56:04.032244] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.055 [2024-07-15 16:56:04.146846] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.055 [2024-07-15 16:56:04.202020] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:14.312  Copying: 48/48 [kB] (average 46 MBps) 00:07:14.312 00:07:14.312 16:56:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:07:14.312 16:56:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:14.312 16:56:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:14.312 16:56:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:14.312 [2024-07-15 16:56:04.590537] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:07:14.313 [2024-07-15 16:56:04.590640] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62735 ] 00:07:14.313 { 00:07:14.313 "subsystems": [ 00:07:14.313 { 00:07:14.313 "subsystem": "bdev", 00:07:14.313 "config": [ 00:07:14.313 { 00:07:14.313 "params": { 00:07:14.313 "trtype": "pcie", 00:07:14.313 "traddr": "0000:00:10.0", 00:07:14.313 "name": "Nvme0" 00:07:14.313 }, 00:07:14.313 "method": "bdev_nvme_attach_controller" 00:07:14.313 }, 00:07:14.313 { 00:07:14.313 "method": "bdev_wait_for_examine" 00:07:14.313 } 00:07:14.313 ] 00:07:14.313 } 00:07:14.313 ] 00:07:14.313 } 00:07:14.570 [2024-07-15 16:56:04.731269] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.570 [2024-07-15 16:56:04.860671] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.827 [2024-07-15 16:56:04.921274] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:15.084  Copying: 48/48 [kB] (average 46 MBps) 00:07:15.084 00:07:15.084 16:56:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:15.085 16:56:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:07:15.085 16:56:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:15.085 16:56:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:15.085 16:56:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:07:15.085 16:56:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:15.085 16:56:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:15.085 16:56:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:15.085 16:56:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:15.085 16:56:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:15.085 16:56:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:15.085 [2024-07-15 16:56:05.305043] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:07:15.085 [2024-07-15 16:56:05.305626] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62757 ] 00:07:15.085 { 00:07:15.085 "subsystems": [ 00:07:15.085 { 00:07:15.085 "subsystem": "bdev", 00:07:15.085 "config": [ 00:07:15.085 { 00:07:15.085 "params": { 00:07:15.085 "trtype": "pcie", 00:07:15.085 "traddr": "0000:00:10.0", 00:07:15.085 "name": "Nvme0" 00:07:15.085 }, 00:07:15.085 "method": "bdev_nvme_attach_controller" 00:07:15.085 }, 00:07:15.085 { 00:07:15.085 "method": "bdev_wait_for_examine" 00:07:15.085 } 00:07:15.085 ] 00:07:15.085 } 00:07:15.085 ] 00:07:15.085 } 00:07:15.342 [2024-07-15 16:56:05.443180] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.342 [2024-07-15 16:56:05.571456] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.342 [2024-07-15 16:56:05.631225] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:15.858  Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:15.858 00:07:15.858 00:07:15.858 real 0m16.060s 00:07:15.858 user 0m12.005s 00:07:15.858 sys 0m5.534s 00:07:15.858 16:56:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:15.858 16:56:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:15.858 ************************************ 00:07:15.858 END TEST dd_rw 00:07:15.858 ************************************ 00:07:15.858 16:56:06 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1142 -- # return 0 00:07:15.858 16:56:06 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:07:15.858 16:56:06 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:15.858 16:56:06 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:15.858 16:56:06 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:15.858 ************************************ 00:07:15.858 START TEST dd_rw_offset 00:07:15.858 ************************************ 00:07:15.858 16:56:06 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1123 -- # basic_offset 00:07:15.858 16:56:06 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:07:15.858 16:56:06 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:07:15.859 16:56:06 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:07:15.859 16:56:06 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:15.859 16:56:06 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:07:15.859 16:56:06 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=6leke5se323dsj0qj7rdgrr776bzly8pwsv50tc60c1jixx5bpxufnqz5czkql12oslxq5g7iq6oy0egyqo01oexdw3xq0pjo4tmiv7bn1b3px07ptcdmltstknom602izijervz25nsttvkmjbybfckughh2ff9lj2jzav4sgpfz93wv3nsg31b6jxw7r01hsfxf8kzes4xpprjnj1xsuok618ezcpnjkndfvhrendopsw3lmee82hw7yrnymcmiq13f3xkmza916wfokscjh8z635lq5z5ri4sadd09do5xva57vgl6740guw4iq71fsl86w8ovg3rzuobhttf9f46vcg4nencibu3dpel2k4iv0h5d33bccnxnhf5h6r9h15h0j61wsrlsdu81h5cuukoqqh17iblliwltntu93nc5rgbdqz4loh4qboujhx3krur44enls30xochlkj9jveo5jflopvme0a8ulxo6c6p1ym8fc4erf244okdj4ve5wi27fksymc21ar43rd7w1ysu8s1dm0qyjucjfrtqk8z9l0r3xhx1mz9lansilelhj7g3zx6hmsw5viwwc5ok8a10y43q411opl9xxgeya3dlcj8cd1iu9y13cxi6n6fnz64xzopsjs5zpcg43lnma3bw0hwq5rhlsyo78u58hipsb56uvr9lskbybgs17liah90p6uqdia8ykj2vlni96gy4em6kxd443977a2asy76h1ag0tirzablprecqu1qt4040punad8d2bbcecy1mkkdxecjw65olfaedq1ugc8z8m487y3k44a49k8ndxlbps3sbwg12qnzlcgdosb5huomi7uamyj4rl346czenamwg0xikjs8oopzu61oe572msp8fsi4z5m577krnmwloopj5n9q9ikjj4nz3cg8iekcylpecphf71xzfqox3ioc2l4o4zah9q1yqkfp238ana0pvq0ddc5g9zlm8p89c6raj3jvuw7ol5a9mbz58yc5qs9osvhqlu4ikxlzq7351o6lpbj5z0ofo7b9vfxf33abizr7be0ba33zban89b7s64mc0fmikxng0q9zlljjq8mqdpgad8zq1mqitm7pvv6jrhqcouy41zcbql939e5g0gw0eu9t2y0bxfj5gzcjrd91njinrq03e6yu719c6i7chpvxjgcmjoezqy7kydv3ouqex9qdnhile4kuezqic7e3hgsinicrbwhnpb03jwl9zor5ntbm67oz0qmf7646phf4xgdf8arzj3zqevhz5zgumeunbbijswtu5ctwwwvhmsks8j846jlxmbrn55vd3kiumfbjcb5ryrwy44qenneyzvuehhzqppk89145iapxjyh1qci91jja0twyhbbh2mh46tgt211el4zvyiu882aer93zfjz0oayj3h1tnmxwbydhu7pizo9vfhee5fur414lslfdpeo9mugu5v94ru4y6m0zrtphbgq2hn9t6b439vz1x8bnwrlqen3nozp2xv6uq3ga637cdcgyiidl70bgfmdg7gq9rv46o00sewjafi0wqlbc3mspc0t615eapgkx3b2mzu7p7zco6qtjognsim1nma0z7kc7z7gkv1z4tynnf4n54ug576jctp2tpbbfi5frrhj199965rp0tniypghz3pkmamypjkeyb9imhip8xa7dltz7244pff3hylodht46o4i1p86zmyclvybhdhu2coacsic2zdreqgiiyo4qhor5oz88sl3i3l97sumydj5wgqzlpzdr6tezndw15vf0rcslt1ghi9e8kylc634j2eajy7sjbnfmesm85460nwpbcdfg4gll3cpr43yv1vw6b2vjzokdsyco3c23u0nnd9v4a5w4cf0pobuc66qtfj1n4hjs0v4i5u98vlao6d6goopivj0h6oknr1ioc0l3yzr61k35x1329v5wbwb3jwrdol3dcqnlsx0x1fkpdwlyhc8k6ogt7yyg73ek3pi62s8ek04s4p1cvd89u3313dzf6drcpfn8p0xqpb0pwf1b98re24u29i0fjgb26t5lwu8q4uio3bgvhbnxia8ayyet1cv3tlsvpzic238w8xroq6os2o1h06ulfopucszgbn7n58yvtbmy0otexpk0txpng26g2sjkpxs52vdn1ctjbr4urgr61wqw2dbl12u970xx0iimthcgwfuq8uqmuedsrz28x63am6k7delbtuv5zvgprhjbra1g0utkoc0xv1mcv01cbw5g6n1xqezy9jrxmq5azasiyqldxol7adcwy4m0uakfapecry7uzbpavccrybqk56b1djnw3sq2d5hy6hllmyncy5383yq0l3vebm0svtwnzfiki597l8o2no6wzcgkl063rah4gjkjqi1bf584851palq07794921rrp8eqzh7gzwv3u8tyhycanha1htliyyoawm06ai57dur3dvzzc5jwd7ks5kmad1osju4oti21cjqw6el9gxeht8o10o6vdftr5307c13ejtndx6awbcst4sqxnt0738kqipvy1j039e4x3mad2jxq4xc1259wsq7bmf8520moykvvovnxgqxo9pel9omn8sgh115ef58kqi0izxb4yr3udf6etxwalr4i4fxlh5lbkvv67l4lfwjqyyo5lhr242mifw6sho0brvjhpa4em424usj5fvddsp1oyclzvcfn57zhy3nb0mj946q43nis3vnudizm1o3kqhbptd2h5ajn889lonk7si9p1j77gs3wn3t3j1zg4x7hkiwl14nxephrttadw93eewrhr3jisbc61gl2q3za1eks17fhajbr2ohia7wwcaqpjih8vz9bk20f0h4o551bjtddr29bko8l6lwvv0z00unqkh3h33yfsh1tan657fjdr15q5402vn8umza5ol14htbbdgit2c78rs9138a2h0udvrkeusr1w7kukuqvyhur3m3kcnetqnhu8lpbbl9nvn7zqlfbs5xx2xfyecx5wqupreov9pfuo6h1d0olmltaa9o3okp2njn0wjrzu3h497eg7hdh2hj3uce2uj9lvnd7j7x6n6xxytbr89r2wz8ntoe9pqcn0vrpti7g2u0zgribqrx644zxga399ffijhu1lgyfyzzw9ml5uldkc5419chyspj1xnbd7qpum5kv64jv7tfwudta49ds5gku4ccfsbrwml5ro7963blh1psv204vnijhxrdz72jrt7w54wz84jdchz73n5lz4ysi8feog0vxp94goly6m4qf891dc9iufp55rrasm787qj80biljr3v7fgntixjuzy04cz1yqg6ot7yyc3pj3cwv2msouduiqldvl8mg96qqrzosducpnkjbqag8wd6xy99gny73p8a8cx9t19kzyflpwyrl3kfxownafnhn46ifd00ihl9xop4t0rwljt0e1ccvihow8zygiywjpnml268d1jj1l6nrabe5pyb6p
0jn3j3688amqwzd61lj8jtq7ck1hw1t925swgevux78ov6n1y5lucoew6wwue2dp8sy9h13e8vh3rjqyfjlyzogrgn7w9t4hv8cfuco7gsp4wd7zphak3mlvi2uam0q7nnwipjlh5sv9sgjpfz1a4tv2e0n756t8nqz10ev9fcho26lkzc7qpzqzb78h39o5fhqdilc3gdqnjcmnmgfuprln9prtehenx9xvacrtjh5qmsriqfwnm04af34g22ppcxnxbpzhgbdyttlugq9iqo04xi2nj4luomn4hc167h9lcokg3flff3ab6xhqccrtzvlz114gfkqf8zx4udbmjx2wkbpirzy5vtc5inu23sjp2bzz1vllab0u2s2vzlor6es737fdunfg3v4nd0csx09hnakgisydvsyp7jyh2k0e0u6dakjvttm7wsruzl45z6jd5rom3kl9how1m68e0pmvazupckxddnd3gy1eubitpgttxbw4lau7lg6ox66e6r3nq1dwkulf4zwrt80utrbgg3f2edkkz5 00:07:15.859 16:56:06 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:07:15.859 16:56:06 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:07:15.859 16:56:06 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:07:15.859 16:56:06 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:15.859 { 00:07:15.859 "subsystems": [ 00:07:15.859 { 00:07:15.859 "subsystem": "bdev", 00:07:15.859 "config": [ 00:07:15.859 { 00:07:15.859 "params": { 00:07:15.859 "trtype": "pcie", 00:07:15.859 "traddr": "0000:00:10.0", 00:07:15.859 "name": "Nvme0" 00:07:15.859 }, 00:07:15.859 "method": "bdev_nvme_attach_controller" 00:07:15.859 }, 00:07:15.859 { 00:07:15.859 "method": "bdev_wait_for_examine" 00:07:15.859 } 00:07:15.859 ] 00:07:15.859 } 00:07:15.859 ] 00:07:15.859 } 00:07:15.859 [2024-07-15 16:56:06.135303] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:07:15.859 [2024-07-15 16:56:06.135456] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62793 ] 00:07:16.117 [2024-07-15 16:56:06.276638] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.117 [2024-07-15 16:56:06.391787] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.375 [2024-07-15 16:56:06.448283] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:16.632  Copying: 4096/4096 [B] (average 4000 kBps) 00:07:16.632 00:07:16.632 16:56:06 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:07:16.632 16:56:06 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:07:16.632 16:56:06 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:07:16.632 16:56:06 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:16.632 { 00:07:16.632 "subsystems": [ 00:07:16.632 { 00:07:16.632 "subsystem": "bdev", 00:07:16.632 "config": [ 00:07:16.632 { 00:07:16.632 "params": { 00:07:16.632 "trtype": "pcie", 00:07:16.632 "traddr": "0000:00:10.0", 00:07:16.632 "name": "Nvme0" 00:07:16.632 }, 00:07:16.632 "method": "bdev_nvme_attach_controller" 00:07:16.632 }, 00:07:16.632 { 00:07:16.632 "method": "bdev_wait_for_examine" 00:07:16.632 } 00:07:16.632 ] 00:07:16.632 } 00:07:16.632 ] 00:07:16.632 } 00:07:16.632 [2024-07-15 16:56:06.842647] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
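The dd_rw_offset test just started writes one 4096-byte block of generated data at block offset 1 (--seek=1 on the write) and reads it back from the same offset (--skip=1 --count=1 on the read); the long pattern match further down compares the read-back bytes against the original string. A sketch of that round trip, with the data generation and file locations assumed and gen_conf reused from the earlier sketch:

# Offset round trip: seek past block 0 on the write, skip block 0 on the read.
spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
TEST_DIR=/home/vagrant/spdk_repo/spdk/test/dd
data=$(tr -dc 'a-z0-9' < /dev/urandom | head -c 4096)             # stand-in for gen_bytes 4096
printf '%s' "$data" > "$TEST_DIR/dd.dump0"
"$spdk_dd" --if="$TEST_DIR/dd.dump0" --ob=Nvme0n1 --seek=1           --json /dev/fd/62 62< <(gen_conf)
"$spdk_dd" --ib=Nvme0n1 --of="$TEST_DIR/dd.dump1" --skip=1 --count=1 --json /dev/fd/62 62< <(gen_conf)
read -rn4096 data_check < "$TEST_DIR/dd.dump1"
[[ $data_check == "$data" ]]                                       # first 4096 bytes read back must match the original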
00:07:16.632 [2024-07-15 16:56:06.842746] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62807 ] 00:07:16.889 [2024-07-15 16:56:06.981742] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.889 [2024-07-15 16:56:07.098149] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.889 [2024-07-15 16:56:07.153899] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:17.405  Copying: 4096/4096 [B] (average 4000 kBps) 00:07:17.405 00:07:17.405 ************************************ 00:07:17.405 END TEST dd_rw_offset 00:07:17.405 ************************************ 00:07:17.405 16:56:07 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:07:17.406 16:56:07 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ 6leke5se323dsj0qj7rdgrr776bzly8pwsv50tc60c1jixx5bpxufnqz5czkql12oslxq5g7iq6oy0egyqo01oexdw3xq0pjo4tmiv7bn1b3px07ptcdmltstknom602izijervz25nsttvkmjbybfckughh2ff9lj2jzav4sgpfz93wv3nsg31b6jxw7r01hsfxf8kzes4xpprjnj1xsuok618ezcpnjkndfvhrendopsw3lmee82hw7yrnymcmiq13f3xkmza916wfokscjh8z635lq5z5ri4sadd09do5xva57vgl6740guw4iq71fsl86w8ovg3rzuobhttf9f46vcg4nencibu3dpel2k4iv0h5d33bccnxnhf5h6r9h15h0j61wsrlsdu81h5cuukoqqh17iblliwltntu93nc5rgbdqz4loh4qboujhx3krur44enls30xochlkj9jveo5jflopvme0a8ulxo6c6p1ym8fc4erf244okdj4ve5wi27fksymc21ar43rd7w1ysu8s1dm0qyjucjfrtqk8z9l0r3xhx1mz9lansilelhj7g3zx6hmsw5viwwc5ok8a10y43q411opl9xxgeya3dlcj8cd1iu9y13cxi6n6fnz64xzopsjs5zpcg43lnma3bw0hwq5rhlsyo78u58hipsb56uvr9lskbybgs17liah90p6uqdia8ykj2vlni96gy4em6kxd443977a2asy76h1ag0tirzablprecqu1qt4040punad8d2bbcecy1mkkdxecjw65olfaedq1ugc8z8m487y3k44a49k8ndxlbps3sbwg12qnzlcgdosb5huomi7uamyj4rl346czenamwg0xikjs8oopzu61oe572msp8fsi4z5m577krnmwloopj5n9q9ikjj4nz3cg8iekcylpecphf71xzfqox3ioc2l4o4zah9q1yqkfp238ana0pvq0ddc5g9zlm8p89c6raj3jvuw7ol5a9mbz58yc5qs9osvhqlu4ikxlzq7351o6lpbj5z0ofo7b9vfxf33abizr7be0ba33zban89b7s64mc0fmikxng0q9zlljjq8mqdpgad8zq1mqitm7pvv6jrhqcouy41zcbql939e5g0gw0eu9t2y0bxfj5gzcjrd91njinrq03e6yu719c6i7chpvxjgcmjoezqy7kydv3ouqex9qdnhile4kuezqic7e3hgsinicrbwhnpb03jwl9zor5ntbm67oz0qmf7646phf4xgdf8arzj3zqevhz5zgumeunbbijswtu5ctwwwvhmsks8j846jlxmbrn55vd3kiumfbjcb5ryrwy44qenneyzvuehhzqppk89145iapxjyh1qci91jja0twyhbbh2mh46tgt211el4zvyiu882aer93zfjz0oayj3h1tnmxwbydhu7pizo9vfhee5fur414lslfdpeo9mugu5v94ru4y6m0zrtphbgq2hn9t6b439vz1x8bnwrlqen3nozp2xv6uq3ga637cdcgyiidl70bgfmdg7gq9rv46o00sewjafi0wqlbc3mspc0t615eapgkx3b2mzu7p7zco6qtjognsim1nma0z7kc7z7gkv1z4tynnf4n54ug576jctp2tpbbfi5frrhj199965rp0tniypghz3pkmamypjkeyb9imhip8xa7dltz7244pff3hylodht46o4i1p86zmyclvybhdhu2coacsic2zdreqgiiyo4qhor5oz88sl3i3l97sumydj5wgqzlpzdr6tezndw15vf0rcslt1ghi9e8kylc634j2eajy7sjbnfmesm85460nwpbcdfg4gll3cpr43yv1vw6b2vjzokdsyco3c23u0nnd9v4a5w4cf0pobuc66qtfj1n4hjs0v4i5u98vlao6d6goopivj0h6oknr1ioc0l3yzr61k35x1329v5wbwb3jwrdol3dcqnlsx0x1fkpdwlyhc8k6ogt7yyg73ek3pi62s8ek04s4p1cvd89u3313dzf6drcpfn8p0xqpb0pwf1b98re24u29i0fjgb26t5lwu8q4uio3bgvhbnxia8ayyet1cv3tlsvpzic238w8xroq6os2o1h06ulfopucszgbn7n58yvtbmy0otexpk0txpng26g2sjkpxs52vdn1ctjbr4urgr61wqw2dbl12u970xx0iimthcgwfuq8uqmuedsrz28x63am6k7delbtuv5zvgprhjbra1g0utkoc0xv1mcv01cbw5g6n1xqezy9jrxmq5azasiyqldxol7adcwy4m0uakfapecry7uzbpavccrybqk56b1djnw3sq2d5hy6hllmyncy5383yq0l3vebm0svtwnzfiki597l8o2no6wzcgkl063rah4gjkjqi1bf584851palq07794921rrp8eqzh7gzwv3u8tyhycanha1htliyyoawm06ai57
dur3dvzzc5jwd7ks5kmad1osju4oti21cjqw6el9gxeht8o10o6vdftr5307c13ejtndx6awbcst4sqxnt0738kqipvy1j039e4x3mad2jxq4xc1259wsq7bmf8520moykvvovnxgqxo9pel9omn8sgh115ef58kqi0izxb4yr3udf6etxwalr4i4fxlh5lbkvv67l4lfwjqyyo5lhr242mifw6sho0brvjhpa4em424usj5fvddsp1oyclzvcfn57zhy3nb0mj946q43nis3vnudizm1o3kqhbptd2h5ajn889lonk7si9p1j77gs3wn3t3j1zg4x7hkiwl14nxephrttadw93eewrhr3jisbc61gl2q3za1eks17fhajbr2ohia7wwcaqpjih8vz9bk20f0h4o551bjtddr29bko8l6lwvv0z00unqkh3h33yfsh1tan657fjdr15q5402vn8umza5ol14htbbdgit2c78rs9138a2h0udvrkeusr1w7kukuqvyhur3m3kcnetqnhu8lpbbl9nvn7zqlfbs5xx2xfyecx5wqupreov9pfuo6h1d0olmltaa9o3okp2njn0wjrzu3h497eg7hdh2hj3uce2uj9lvnd7j7x6n6xxytbr89r2wz8ntoe9pqcn0vrpti7g2u0zgribqrx644zxga399ffijhu1lgyfyzzw9ml5uldkc5419chyspj1xnbd7qpum5kv64jv7tfwudta49ds5gku4ccfsbrwml5ro7963blh1psv204vnijhxrdz72jrt7w54wz84jdchz73n5lz4ysi8feog0vxp94goly6m4qf891dc9iufp55rrasm787qj80biljr3v7fgntixjuzy04cz1yqg6ot7yyc3pj3cwv2msouduiqldvl8mg96qqrzosducpnkjbqag8wd6xy99gny73p8a8cx9t19kzyflpwyrl3kfxownafnhn46ifd00ihl9xop4t0rwljt0e1ccvihow8zygiywjpnml268d1jj1l6nrabe5pyb6p0jn3j3688amqwzd61lj8jtq7ck1hw1t925swgevux78ov6n1y5lucoew6wwue2dp8sy9h13e8vh3rjqyfjlyzogrgn7w9t4hv8cfuco7gsp4wd7zphak3mlvi2uam0q7nnwipjlh5sv9sgjpfz1a4tv2e0n756t8nqz10ev9fcho26lkzc7qpzqzb78h39o5fhqdilc3gdqnjcmnmgfuprln9prtehenx9xvacrtjh5qmsriqfwnm04af34g22ppcxnxbpzhgbdyttlugq9iqo04xi2nj4luomn4hc167h9lcokg3flff3ab6xhqccrtzvlz114gfkqf8zx4udbmjx2wkbpirzy5vtc5inu23sjp2bzz1vllab0u2s2vzlor6es737fdunfg3v4nd0csx09hnakgisydvsyp7jyh2k0e0u6dakjvttm7wsruzl45z6jd5rom3kl9how1m68e0pmvazupckxddnd3gy1eubitpgttxbw4lau7lg6ox66e6r3nq1dwkulf4zwrt80utrbgg3f2edkkz5 == \6\l\e\k\e\5\s\e\3\2\3\d\s\j\0\q\j\7\r\d\g\r\r\7\7\6\b\z\l\y\8\p\w\s\v\5\0\t\c\6\0\c\1\j\i\x\x\5\b\p\x\u\f\n\q\z\5\c\z\k\q\l\1\2\o\s\l\x\q\5\g\7\i\q\6\o\y\0\e\g\y\q\o\0\1\o\e\x\d\w\3\x\q\0\p\j\o\4\t\m\i\v\7\b\n\1\b\3\p\x\0\7\p\t\c\d\m\l\t\s\t\k\n\o\m\6\0\2\i\z\i\j\e\r\v\z\2\5\n\s\t\t\v\k\m\j\b\y\b\f\c\k\u\g\h\h\2\f\f\9\l\j\2\j\z\a\v\4\s\g\p\f\z\9\3\w\v\3\n\s\g\3\1\b\6\j\x\w\7\r\0\1\h\s\f\x\f\8\k\z\e\s\4\x\p\p\r\j\n\j\1\x\s\u\o\k\6\1\8\e\z\c\p\n\j\k\n\d\f\v\h\r\e\n\d\o\p\s\w\3\l\m\e\e\8\2\h\w\7\y\r\n\y\m\c\m\i\q\1\3\f\3\x\k\m\z\a\9\1\6\w\f\o\k\s\c\j\h\8\z\6\3\5\l\q\5\z\5\r\i\4\s\a\d\d\0\9\d\o\5\x\v\a\5\7\v\g\l\6\7\4\0\g\u\w\4\i\q\7\1\f\s\l\8\6\w\8\o\v\g\3\r\z\u\o\b\h\t\t\f\9\f\4\6\v\c\g\4\n\e\n\c\i\b\u\3\d\p\e\l\2\k\4\i\v\0\h\5\d\3\3\b\c\c\n\x\n\h\f\5\h\6\r\9\h\1\5\h\0\j\6\1\w\s\r\l\s\d\u\8\1\h\5\c\u\u\k\o\q\q\h\1\7\i\b\l\l\i\w\l\t\n\t\u\9\3\n\c\5\r\g\b\d\q\z\4\l\o\h\4\q\b\o\u\j\h\x\3\k\r\u\r\4\4\e\n\l\s\3\0\x\o\c\h\l\k\j\9\j\v\e\o\5\j\f\l\o\p\v\m\e\0\a\8\u\l\x\o\6\c\6\p\1\y\m\8\f\c\4\e\r\f\2\4\4\o\k\d\j\4\v\e\5\w\i\2\7\f\k\s\y\m\c\2\1\a\r\4\3\r\d\7\w\1\y\s\u\8\s\1\d\m\0\q\y\j\u\c\j\f\r\t\q\k\8\z\9\l\0\r\3\x\h\x\1\m\z\9\l\a\n\s\i\l\e\l\h\j\7\g\3\z\x\6\h\m\s\w\5\v\i\w\w\c\5\o\k\8\a\1\0\y\4\3\q\4\1\1\o\p\l\9\x\x\g\e\y\a\3\d\l\c\j\8\c\d\1\i\u\9\y\1\3\c\x\i\6\n\6\f\n\z\6\4\x\z\o\p\s\j\s\5\z\p\c\g\4\3\l\n\m\a\3\b\w\0\h\w\q\5\r\h\l\s\y\o\7\8\u\5\8\h\i\p\s\b\5\6\u\v\r\9\l\s\k\b\y\b\g\s\1\7\l\i\a\h\9\0\p\6\u\q\d\i\a\8\y\k\j\2\v\l\n\i\9\6\g\y\4\e\m\6\k\x\d\4\4\3\9\7\7\a\2\a\s\y\7\6\h\1\a\g\0\t\i\r\z\a\b\l\p\r\e\c\q\u\1\q\t\4\0\4\0\p\u\n\a\d\8\d\2\b\b\c\e\c\y\1\m\k\k\d\x\e\c\j\w\6\5\o\l\f\a\e\d\q\1\u\g\c\8\z\8\m\4\8\7\y\3\k\4\4\a\4\9\k\8\n\d\x\l\b\p\s\3\s\b\w\g\1\2\q\n\z\l\c\g\d\o\s\b\5\h\u\o\m\i\7\u\a\m\y\j\4\r\l\3\4\6\c\z\e\n\a\m\w\g\0\x\i\k\j\s\8\o\o\p\z\u\6\1\o\e\5\7\2\m\s\p\8\f\s\i\4\z\5\m\5\7\7\k\r\n\m\w\l\o\o\p\j\5\n\9\q\9\i\k\j\j\4\n\z\3\c\g\8\i\e\k\c\y\l\p\e\c\p\h\f\7\1\x\z\f\q\o\x\3\i\o\c\2\l\4\o\4\z\a\h\9\q\1\y\q\k\f\p\2\3
\8\a\n\a\0\p\v\q\0\d\d\c\5\g\9\z\l\m\8\p\8\9\c\6\r\a\j\3\j\v\u\w\7\o\l\5\a\9\m\b\z\5\8\y\c\5\q\s\9\o\s\v\h\q\l\u\4\i\k\x\l\z\q\7\3\5\1\o\6\l\p\b\j\5\z\0\o\f\o\7\b\9\v\f\x\f\3\3\a\b\i\z\r\7\b\e\0\b\a\3\3\z\b\a\n\8\9\b\7\s\6\4\m\c\0\f\m\i\k\x\n\g\0\q\9\z\l\l\j\j\q\8\m\q\d\p\g\a\d\8\z\q\1\m\q\i\t\m\7\p\v\v\6\j\r\h\q\c\o\u\y\4\1\z\c\b\q\l\9\3\9\e\5\g\0\g\w\0\e\u\9\t\2\y\0\b\x\f\j\5\g\z\c\j\r\d\9\1\n\j\i\n\r\q\0\3\e\6\y\u\7\1\9\c\6\i\7\c\h\p\v\x\j\g\c\m\j\o\e\z\q\y\7\k\y\d\v\3\o\u\q\e\x\9\q\d\n\h\i\l\e\4\k\u\e\z\q\i\c\7\e\3\h\g\s\i\n\i\c\r\b\w\h\n\p\b\0\3\j\w\l\9\z\o\r\5\n\t\b\m\6\7\o\z\0\q\m\f\7\6\4\6\p\h\f\4\x\g\d\f\8\a\r\z\j\3\z\q\e\v\h\z\5\z\g\u\m\e\u\n\b\b\i\j\s\w\t\u\5\c\t\w\w\w\v\h\m\s\k\s\8\j\8\4\6\j\l\x\m\b\r\n\5\5\v\d\3\k\i\u\m\f\b\j\c\b\5\r\y\r\w\y\4\4\q\e\n\n\e\y\z\v\u\e\h\h\z\q\p\p\k\8\9\1\4\5\i\a\p\x\j\y\h\1\q\c\i\9\1\j\j\a\0\t\w\y\h\b\b\h\2\m\h\4\6\t\g\t\2\1\1\e\l\4\z\v\y\i\u\8\8\2\a\e\r\9\3\z\f\j\z\0\o\a\y\j\3\h\1\t\n\m\x\w\b\y\d\h\u\7\p\i\z\o\9\v\f\h\e\e\5\f\u\r\4\1\4\l\s\l\f\d\p\e\o\9\m\u\g\u\5\v\9\4\r\u\4\y\6\m\0\z\r\t\p\h\b\g\q\2\h\n\9\t\6\b\4\3\9\v\z\1\x\8\b\n\w\r\l\q\e\n\3\n\o\z\p\2\x\v\6\u\q\3\g\a\6\3\7\c\d\c\g\y\i\i\d\l\7\0\b\g\f\m\d\g\7\g\q\9\r\v\4\6\o\0\0\s\e\w\j\a\f\i\0\w\q\l\b\c\3\m\s\p\c\0\t\6\1\5\e\a\p\g\k\x\3\b\2\m\z\u\7\p\7\z\c\o\6\q\t\j\o\g\n\s\i\m\1\n\m\a\0\z\7\k\c\7\z\7\g\k\v\1\z\4\t\y\n\n\f\4\n\5\4\u\g\5\7\6\j\c\t\p\2\t\p\b\b\f\i\5\f\r\r\h\j\1\9\9\9\6\5\r\p\0\t\n\i\y\p\g\h\z\3\p\k\m\a\m\y\p\j\k\e\y\b\9\i\m\h\i\p\8\x\a\7\d\l\t\z\7\2\4\4\p\f\f\3\h\y\l\o\d\h\t\4\6\o\4\i\1\p\8\6\z\m\y\c\l\v\y\b\h\d\h\u\2\c\o\a\c\s\i\c\2\z\d\r\e\q\g\i\i\y\o\4\q\h\o\r\5\o\z\8\8\s\l\3\i\3\l\9\7\s\u\m\y\d\j\5\w\g\q\z\l\p\z\d\r\6\t\e\z\n\d\w\1\5\v\f\0\r\c\s\l\t\1\g\h\i\9\e\8\k\y\l\c\6\3\4\j\2\e\a\j\y\7\s\j\b\n\f\m\e\s\m\8\5\4\6\0\n\w\p\b\c\d\f\g\4\g\l\l\3\c\p\r\4\3\y\v\1\v\w\6\b\2\v\j\z\o\k\d\s\y\c\o\3\c\2\3\u\0\n\n\d\9\v\4\a\5\w\4\c\f\0\p\o\b\u\c\6\6\q\t\f\j\1\n\4\h\j\s\0\v\4\i\5\u\9\8\v\l\a\o\6\d\6\g\o\o\p\i\v\j\0\h\6\o\k\n\r\1\i\o\c\0\l\3\y\z\r\6\1\k\3\5\x\1\3\2\9\v\5\w\b\w\b\3\j\w\r\d\o\l\3\d\c\q\n\l\s\x\0\x\1\f\k\p\d\w\l\y\h\c\8\k\6\o\g\t\7\y\y\g\7\3\e\k\3\p\i\6\2\s\8\e\k\0\4\s\4\p\1\c\v\d\8\9\u\3\3\1\3\d\z\f\6\d\r\c\p\f\n\8\p\0\x\q\p\b\0\p\w\f\1\b\9\8\r\e\2\4\u\2\9\i\0\f\j\g\b\2\6\t\5\l\w\u\8\q\4\u\i\o\3\b\g\v\h\b\n\x\i\a\8\a\y\y\e\t\1\c\v\3\t\l\s\v\p\z\i\c\2\3\8\w\8\x\r\o\q\6\o\s\2\o\1\h\0\6\u\l\f\o\p\u\c\s\z\g\b\n\7\n\5\8\y\v\t\b\m\y\0\o\t\e\x\p\k\0\t\x\p\n\g\2\6\g\2\s\j\k\p\x\s\5\2\v\d\n\1\c\t\j\b\r\4\u\r\g\r\6\1\w\q\w\2\d\b\l\1\2\u\9\7\0\x\x\0\i\i\m\t\h\c\g\w\f\u\q\8\u\q\m\u\e\d\s\r\z\2\8\x\6\3\a\m\6\k\7\d\e\l\b\t\u\v\5\z\v\g\p\r\h\j\b\r\a\1\g\0\u\t\k\o\c\0\x\v\1\m\c\v\0\1\c\b\w\5\g\6\n\1\x\q\e\z\y\9\j\r\x\m\q\5\a\z\a\s\i\y\q\l\d\x\o\l\7\a\d\c\w\y\4\m\0\u\a\k\f\a\p\e\c\r\y\7\u\z\b\p\a\v\c\c\r\y\b\q\k\5\6\b\1\d\j\n\w\3\s\q\2\d\5\h\y\6\h\l\l\m\y\n\c\y\5\3\8\3\y\q\0\l\3\v\e\b\m\0\s\v\t\w\n\z\f\i\k\i\5\9\7\l\8\o\2\n\o\6\w\z\c\g\k\l\0\6\3\r\a\h\4\g\j\k\j\q\i\1\b\f\5\8\4\8\5\1\p\a\l\q\0\7\7\9\4\9\2\1\r\r\p\8\e\q\z\h\7\g\z\w\v\3\u\8\t\y\h\y\c\a\n\h\a\1\h\t\l\i\y\y\o\a\w\m\0\6\a\i\5\7\d\u\r\3\d\v\z\z\c\5\j\w\d\7\k\s\5\k\m\a\d\1\o\s\j\u\4\o\t\i\2\1\c\j\q\w\6\e\l\9\g\x\e\h\t\8\o\1\0\o\6\v\d\f\t\r\5\3\0\7\c\1\3\e\j\t\n\d\x\6\a\w\b\c\s\t\4\s\q\x\n\t\0\7\3\8\k\q\i\p\v\y\1\j\0\3\9\e\4\x\3\m\a\d\2\j\x\q\4\x\c\1\2\5\9\w\s\q\7\b\m\f\8\5\2\0\m\o\y\k\v\v\o\v\n\x\g\q\x\o\9\p\e\l\9\o\m\n\8\s\g\h\1\1\5\e\f\5\8\k\q\i\0\i\z\x\b\4\y\r\3\u\d\f\6\e\t\x\w\a\l\r\4\i\4\f\x\l\h\5\l\b\k\v\v\6\7\l\4\l\f\w\j\q\y\y\o\5\l\h\r\2\4\2\m\i\f\w\6\s\h\o\0\b\r\v\j\h\p\a\4\e\m\4\2\4\u\s\j\5\f\v\d\d\s\p\1\o\y\c\l\z\v\c\
f\n\5\7\z\h\y\3\n\b\0\m\j\9\4\6\q\4\3\n\i\s\3\v\n\u\d\i\z\m\1\o\3\k\q\h\b\p\t\d\2\h\5\a\j\n\8\8\9\l\o\n\k\7\s\i\9\p\1\j\7\7\g\s\3\w\n\3\t\3\j\1\z\g\4\x\7\h\k\i\w\l\1\4\n\x\e\p\h\r\t\t\a\d\w\9\3\e\e\w\r\h\r\3\j\i\s\b\c\6\1\g\l\2\q\3\z\a\1\e\k\s\1\7\f\h\a\j\b\r\2\o\h\i\a\7\w\w\c\a\q\p\j\i\h\8\v\z\9\b\k\2\0\f\0\h\4\o\5\5\1\b\j\t\d\d\r\2\9\b\k\o\8\l\6\l\w\v\v\0\z\0\0\u\n\q\k\h\3\h\3\3\y\f\s\h\1\t\a\n\6\5\7\f\j\d\r\1\5\q\5\4\0\2\v\n\8\u\m\z\a\5\o\l\1\4\h\t\b\b\d\g\i\t\2\c\7\8\r\s\9\1\3\8\a\2\h\0\u\d\v\r\k\e\u\s\r\1\w\7\k\u\k\u\q\v\y\h\u\r\3\m\3\k\c\n\e\t\q\n\h\u\8\l\p\b\b\l\9\n\v\n\7\z\q\l\f\b\s\5\x\x\2\x\f\y\e\c\x\5\w\q\u\p\r\e\o\v\9\p\f\u\o\6\h\1\d\0\o\l\m\l\t\a\a\9\o\3\o\k\p\2\n\j\n\0\w\j\r\z\u\3\h\4\9\7\e\g\7\h\d\h\2\h\j\3\u\c\e\2\u\j\9\l\v\n\d\7\j\7\x\6\n\6\x\x\y\t\b\r\8\9\r\2\w\z\8\n\t\o\e\9\p\q\c\n\0\v\r\p\t\i\7\g\2\u\0\z\g\r\i\b\q\r\x\6\4\4\z\x\g\a\3\9\9\f\f\i\j\h\u\1\l\g\y\f\y\z\z\w\9\m\l\5\u\l\d\k\c\5\4\1\9\c\h\y\s\p\j\1\x\n\b\d\7\q\p\u\m\5\k\v\6\4\j\v\7\t\f\w\u\d\t\a\4\9\d\s\5\g\k\u\4\c\c\f\s\b\r\w\m\l\5\r\o\7\9\6\3\b\l\h\1\p\s\v\2\0\4\v\n\i\j\h\x\r\d\z\7\2\j\r\t\7\w\5\4\w\z\8\4\j\d\c\h\z\7\3\n\5\l\z\4\y\s\i\8\f\e\o\g\0\v\x\p\9\4\g\o\l\y\6\m\4\q\f\8\9\1\d\c\9\i\u\f\p\5\5\r\r\a\s\m\7\8\7\q\j\8\0\b\i\l\j\r\3\v\7\f\g\n\t\i\x\j\u\z\y\0\4\c\z\1\y\q\g\6\o\t\7\y\y\c\3\p\j\3\c\w\v\2\m\s\o\u\d\u\i\q\l\d\v\l\8\m\g\9\6\q\q\r\z\o\s\d\u\c\p\n\k\j\b\q\a\g\8\w\d\6\x\y\9\9\g\n\y\7\3\p\8\a\8\c\x\9\t\1\9\k\z\y\f\l\p\w\y\r\l\3\k\f\x\o\w\n\a\f\n\h\n\4\6\i\f\d\0\0\i\h\l\9\x\o\p\4\t\0\r\w\l\j\t\0\e\1\c\c\v\i\h\o\w\8\z\y\g\i\y\w\j\p\n\m\l\2\6\8\d\1\j\j\1\l\6\n\r\a\b\e\5\p\y\b\6\p\0\j\n\3\j\3\6\8\8\a\m\q\w\z\d\6\1\l\j\8\j\t\q\7\c\k\1\h\w\1\t\9\2\5\s\w\g\e\v\u\x\7\8\o\v\6\n\1\y\5\l\u\c\o\e\w\6\w\w\u\e\2\d\p\8\s\y\9\h\1\3\e\8\v\h\3\r\j\q\y\f\j\l\y\z\o\g\r\g\n\7\w\9\t\4\h\v\8\c\f\u\c\o\7\g\s\p\4\w\d\7\z\p\h\a\k\3\m\l\v\i\2\u\a\m\0\q\7\n\n\w\i\p\j\l\h\5\s\v\9\s\g\j\p\f\z\1\a\4\t\v\2\e\0\n\7\5\6\t\8\n\q\z\1\0\e\v\9\f\c\h\o\2\6\l\k\z\c\7\q\p\z\q\z\b\7\8\h\3\9\o\5\f\h\q\d\i\l\c\3\g\d\q\n\j\c\m\n\m\g\f\u\p\r\l\n\9\p\r\t\e\h\e\n\x\9\x\v\a\c\r\t\j\h\5\q\m\s\r\i\q\f\w\n\m\0\4\a\f\3\4\g\2\2\p\p\c\x\n\x\b\p\z\h\g\b\d\y\t\t\l\u\g\q\9\i\q\o\0\4\x\i\2\n\j\4\l\u\o\m\n\4\h\c\1\6\7\h\9\l\c\o\k\g\3\f\l\f\f\3\a\b\6\x\h\q\c\c\r\t\z\v\l\z\1\1\4\g\f\k\q\f\8\z\x\4\u\d\b\m\j\x\2\w\k\b\p\i\r\z\y\5\v\t\c\5\i\n\u\2\3\s\j\p\2\b\z\z\1\v\l\l\a\b\0\u\2\s\2\v\z\l\o\r\6\e\s\7\3\7\f\d\u\n\f\g\3\v\4\n\d\0\c\s\x\0\9\h\n\a\k\g\i\s\y\d\v\s\y\p\7\j\y\h\2\k\0\e\0\u\6\d\a\k\j\v\t\t\m\7\w\s\r\u\z\l\4\5\z\6\j\d\5\r\o\m\3\k\l\9\h\o\w\1\m\6\8\e\0\p\m\v\a\z\u\p\c\k\x\d\d\n\d\3\g\y\1\e\u\b\i\t\p\g\t\t\x\b\w\4\l\a\u\7\l\g\6\o\x\6\6\e\6\r\3\n\q\1\d\w\k\u\l\f\4\z\w\r\t\8\0\u\t\r\b\g\g\3\f\2\e\d\k\k\z\5 ]] 00:07:17.406 00:07:17.406 real 0m1.464s 00:07:17.406 user 0m1.032s 00:07:17.406 sys 0m0.608s 00:07:17.406 16:56:07 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:17.406 16:56:07 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:17.406 16:56:07 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1142 -- # return 0 00:07:17.406 16:56:07 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:07:17.406 16:56:07 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:07:17.406 16:56:07 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:17.406 16:56:07 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:17.406 16:56:07 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:07:17.406 16:56:07 
spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:17.406 16:56:07 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:07:17.406 16:56:07 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:07:17.406 16:56:07 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:17.406 16:56:07 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:17.406 16:56:07 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:17.406 [2024-07-15 16:56:07.597431] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:07:17.406 [2024-07-15 16:56:07.597838] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62836 ] 00:07:17.406 { 00:07:17.406 "subsystems": [ 00:07:17.406 { 00:07:17.406 "subsystem": "bdev", 00:07:17.406 "config": [ 00:07:17.406 { 00:07:17.406 "params": { 00:07:17.406 "trtype": "pcie", 00:07:17.406 "traddr": "0000:00:10.0", 00:07:17.406 "name": "Nvme0" 00:07:17.406 }, 00:07:17.406 "method": "bdev_nvme_attach_controller" 00:07:17.406 }, 00:07:17.406 { 00:07:17.406 "method": "bdev_wait_for_examine" 00:07:17.406 } 00:07:17.406 ] 00:07:17.406 } 00:07:17.406 ] 00:07:17.406 } 00:07:17.665 [2024-07-15 16:56:07.738065] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.665 [2024-07-15 16:56:07.855979] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.665 [2024-07-15 16:56:07.912047] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:18.183  Copying: 1024/1024 [kB] (average 500 MBps) 00:07:18.183 00:07:18.183 16:56:08 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:18.183 ************************************ 00:07:18.183 END TEST spdk_dd_basic_rw 00:07:18.183 ************************************ 00:07:18.183 00:07:18.183 real 0m19.391s 00:07:18.183 user 0m14.187s 00:07:18.183 sys 0m6.788s 00:07:18.183 16:56:08 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:18.183 16:56:08 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:18.183 16:56:08 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:07:18.183 16:56:08 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:07:18.183 16:56:08 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:18.183 16:56:08 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:18.183 16:56:08 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:18.183 ************************************ 00:07:18.183 START TEST spdk_dd_posix 00:07:18.183 ************************************ 00:07:18.183 16:56:08 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:07:18.183 * Looking for test storage... 
00:07:18.183 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:18.183 16:56:08 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:18.183 16:56:08 spdk_dd.spdk_dd_posix -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:18.183 16:56:08 spdk_dd.spdk_dd_posix -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:18.183 16:56:08 spdk_dd.spdk_dd_posix -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:18.183 16:56:08 spdk_dd.spdk_dd_posix -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.183 16:56:08 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.183 16:56:08 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.183 16:56:08 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:07:18.183 16:56:08 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.183 16:56:08 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:07:18.183 16:56:08 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:07:18.183 16:56:08 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:07:18.183 16:56:08 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:07:18.183 16:56:08 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:18.183 16:56:08 spdk_dd.spdk_dd_posix 
-- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:18.183 16:56:08 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:07:18.183 16:56:08 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:07:18.183 * First test run, liburing in use 00:07:18.183 16:56:08 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:07:18.183 16:56:08 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:18.183 16:56:08 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:18.183 16:56:08 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:18.183 ************************************ 00:07:18.183 START TEST dd_flag_append 00:07:18.183 ************************************ 00:07:18.183 16:56:08 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1123 -- # append 00:07:18.183 16:56:08 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:07:18.183 16:56:08 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:07:18.183 16:56:08 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:07:18.183 16:56:08 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:07:18.183 16:56:08 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:07:18.183 16:56:08 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=hqatj9dg0oipelqn0awky1zucxertqs1 00:07:18.183 16:56:08 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:07:18.183 16:56:08 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:07:18.183 16:56:08 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:07:18.183 16:56:08 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=kwa1996iu6ci939xwj4k849nuvp470p5 00:07:18.183 16:56:08 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s hqatj9dg0oipelqn0awky1zucxertqs1 00:07:18.183 16:56:08 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s kwa1996iu6ci939xwj4k849nuvp470p5 00:07:18.183 16:56:08 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:07:18.183 [2024-07-15 16:56:08.473490] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
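dd_flag_append fills dd.dump0 and dd.dump1 with two independent 32-byte strings and then copies dump0 onto dump1 with --oflag=append; the test passes only if dd.dump1 afterwards contains the second string immediately followed by the first, which is what the pattern match on the next lines checks (no bdev is involved here, so no --json config is passed). A sketch under the same assumptions as the earlier ones:

# Append-flag check: the copy must extend dd.dump1, not overwrite it.
spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
TEST_DIR=/home/vagrant/spdk_repo/spdk/test/dd
dump0=$(tr -dc 'a-z0-9' < /dev/urandom | head -c 32)               # stand-in for gen_bytes 32
dump1=$(tr -dc 'a-z0-9' < /dev/urandom | head -c 32)
printf '%s' "$dump0" > "$TEST_DIR/dd.dump0"
printf '%s' "$dump1" > "$TEST_DIR/dd.dump1"
"$spdk_dd" --if="$TEST_DIR/dd.dump0" --of="$TEST_DIR/dd.dump1" --oflag=append
[[ $(cat "$TEST_DIR/dd.dump1") == "${dump1}${dump0}" ]]            # dump1 followed by dump0 -> append worked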
00:07:18.183 [2024-07-15 16:56:08.473629] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62900 ] 00:07:18.442 [2024-07-15 16:56:08.612632] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.700 [2024-07-15 16:56:08.745758] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.700 [2024-07-15 16:56:08.806238] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:18.959  Copying: 32/32 [B] (average 31 kBps) 00:07:18.959 00:07:18.959 16:56:09 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ kwa1996iu6ci939xwj4k849nuvp470p5hqatj9dg0oipelqn0awky1zucxertqs1 == \k\w\a\1\9\9\6\i\u\6\c\i\9\3\9\x\w\j\4\k\8\4\9\n\u\v\p\4\7\0\p\5\h\q\a\t\j\9\d\g\0\o\i\p\e\l\q\n\0\a\w\k\y\1\z\u\c\x\e\r\t\q\s\1 ]] 00:07:18.959 00:07:18.959 real 0m0.662s 00:07:18.959 user 0m0.398s 00:07:18.959 sys 0m0.292s 00:07:18.959 ************************************ 00:07:18.959 END TEST dd_flag_append 00:07:18.959 ************************************ 00:07:18.959 16:56:09 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:18.959 16:56:09 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:07:18.959 16:56:09 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:07:18.959 16:56:09 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:07:18.959 16:56:09 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:18.959 16:56:09 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:18.959 16:56:09 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:18.959 ************************************ 00:07:18.959 START TEST dd_flag_directory 00:07:18.959 ************************************ 00:07:18.959 16:56:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1123 -- # directory 00:07:18.959 16:56:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:18.959 16:56:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@648 -- # local es=0 00:07:18.959 16:56:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:18.959 16:56:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:18.959 16:56:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:18.959 16:56:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:18.959 16:56:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:18.959 16:56:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
00:07:18.959 16:56:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:18.959 16:56:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:18.959 16:56:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:18.959 16:56:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:18.959 [2024-07-15 16:56:09.187339] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:07:18.960 [2024-07-15 16:56:09.187477] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62934 ] 00:07:19.218 [2024-07-15 16:56:09.326493] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.218 [2024-07-15 16:56:09.441832] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.218 [2024-07-15 16:56:09.495951] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:19.477 [2024-07-15 16:56:09.531350] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:19.477 [2024-07-15 16:56:09.531426] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:19.477 [2024-07-15 16:56:09.531454] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:19.477 [2024-07-15 16:56:09.652383] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:19.477 16:56:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # es=236 00:07:19.477 16:56:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:19.477 16:56:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@660 -- # es=108 00:07:19.477 16:56:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # case "$es" in 00:07:19.477 16:56:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@668 -- # es=1 00:07:19.477 16:56:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:19.477 16:56:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:19.477 16:56:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@648 -- # local es=0 00:07:19.477 16:56:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:19.477 16:56:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:19.477 16:56:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- 
# case "$(type -t "$arg")" in 00:07:19.477 16:56:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:19.477 16:56:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:19.477 16:56:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:19.477 16:56:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:19.477 16:56:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:19.477 16:56:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:19.477 16:56:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:19.736 [2024-07-15 16:56:09.816208] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:07:19.736 [2024-07-15 16:56:09.816333] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62938 ] 00:07:19.736 [2024-07-15 16:56:09.956595] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.994 [2024-07-15 16:56:10.074407] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.994 [2024-07-15 16:56:10.129575] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:19.994 [2024-07-15 16:56:10.165022] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:19.994 [2024-07-15 16:56:10.165119] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:19.994 [2024-07-15 16:56:10.165164] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:19.994 [2024-07-15 16:56:10.285734] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:20.253 16:56:10 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # es=236 00:07:20.253 16:56:10 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:20.253 16:56:10 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@660 -- # es=108 00:07:20.253 16:56:10 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # case "$es" in 00:07:20.253 16:56:10 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@668 -- # es=1 00:07:20.253 16:56:10 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:20.253 00:07:20.253 real 0m1.262s 00:07:20.253 user 0m0.741s 00:07:20.253 sys 0m0.308s 00:07:20.253 16:56:10 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:20.253 16:56:10 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 00:07:20.253 ************************************ 00:07:20.253 END TEST dd_flag_directory 00:07:20.253 
************************************ 00:07:20.253 16:56:10 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:07:20.253 16:56:10 spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:07:20.253 16:56:10 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:20.253 16:56:10 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:20.253 16:56:10 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:20.253 ************************************ 00:07:20.253 START TEST dd_flag_nofollow 00:07:20.253 ************************************ 00:07:20.253 16:56:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1123 -- # nofollow 00:07:20.253 16:56:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:20.253 16:56:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:20.253 16:56:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:20.253 16:56:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:20.253 16:56:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:20.253 16:56:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@648 -- # local es=0 00:07:20.253 16:56:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:20.253 16:56:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:20.253 16:56:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:20.253 16:56:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:20.253 16:56:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:20.253 16:56:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:20.253 16:56:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:20.253 16:56:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:20.253 16:56:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:20.253 16:56:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:20.253 
[2024-07-15 16:56:10.513685] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:07:20.253 [2024-07-15 16:56:10.513808] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62972 ] 00:07:20.520 [2024-07-15 16:56:10.655758] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.520 [2024-07-15 16:56:10.789335] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.797 [2024-07-15 16:56:10.849169] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:20.797 [2024-07-15 16:56:10.886206] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:20.797 [2024-07-15 16:56:10.886276] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:20.797 [2024-07-15 16:56:10.886305] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:20.797 [2024-07-15 16:56:11.007713] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:21.055 16:56:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # es=216 00:07:21.055 16:56:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:21.055 16:56:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@660 -- # es=88 00:07:21.055 16:56:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # case "$es" in 00:07:21.055 16:56:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@668 -- # es=1 00:07:21.055 16:56:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:21.055 16:56:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:21.055 16:56:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@648 -- # local es=0 00:07:21.055 16:56:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:21.055 16:56:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:21.055 16:56:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:21.055 16:56:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:21.055 16:56:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:21.055 16:56:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:21.055 16:56:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:21.055 16:56:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- 
common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:21.055 16:56:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:21.055 16:56:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:21.055 [2024-07-15 16:56:11.175878] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:07:21.055 [2024-07-15 16:56:11.176009] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62988 ] 00:07:21.055 [2024-07-15 16:56:11.319053] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.314 [2024-07-15 16:56:11.443754] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.314 [2024-07-15 16:56:11.500772] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:21.314 [2024-07-15 16:56:11.536987] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:21.314 [2024-07-15 16:56:11.537043] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:21.314 [2024-07-15 16:56:11.537059] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:21.573 [2024-07-15 16:56:11.660005] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:21.573 16:56:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # es=216 00:07:21.573 16:56:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:21.573 16:56:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@660 -- # es=88 00:07:21.573 16:56:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # case "$es" in 00:07:21.573 16:56:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@668 -- # es=1 00:07:21.573 16:56:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:21.573 16:56:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:07:21.573 16:56:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:07:21.573 16:56:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:07:21.573 16:56:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:21.573 [2024-07-15 16:56:11.823473] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
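
The two NOT(...) runs above expect ELOOP when spdk_dd is pointed at a symlink with --iflag=nofollow or --oflag=nofollow, and the plain dd/posix.sh@48 copy that starts here shows the same path succeeding once the flag is dropped. A minimal standalone sketch of that contract with coreutils dd; the temporary paths are placeholders, not the repo's dd.dump files:

    tmp=$(mktemp -d)
    printf 'payload' > "$tmp/data"
    ln -fs "$tmp/data" "$tmp/data.link"
    if ! dd if="$tmp/data.link" iflag=nofollow of=/dev/null status=none 2>"$tmp/err"; then
      grep -o 'Too many levels of symbolic links' "$tmp/err"   # the expected ELOOP failure
    fi
    dd if="$tmp/data.link" of="$tmp/copy" status=none           # without nofollow the link is followed
    cmp -s "$tmp/data" "$tmp/copy" && echo 'copy through the symlink matches'
    rm -rf "$tmp"
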
00:07:21.573 [2024-07-15 16:56:11.823554] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62990 ] 00:07:21.832 [2024-07-15 16:56:11.957352] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.832 [2024-07-15 16:56:12.080978] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.089 [2024-07-15 16:56:12.138549] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:22.348  Copying: 512/512 [B] (average 500 kBps) 00:07:22.348 00:07:22.348 16:56:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ zfrejw2ayz8j9snth3iht22uc2uee9z6o0n31dmne27wg9a94aslvwdb0yysw3iyvv13kb4c3k51pukplmwuj3ftzgit8v5w44m55dgv5rcwh2j1z4yxas6h3w4l18rud1bw1279srlp324qhwvp4cklqaynzwaqnwfc5p7nlctvwk8md50gu0owr0tpxam2tr0jic43ls8nasfvhbwvhm9387omjy2ankblzcyyduvj78sl2xiaao4gu84v3ndc90uro3x4sht0q2ybnvwf99pdk5t0r06bou5r5gfb2y8a5v19q6cg72tc3fq9732d2rqxs7s7gf0s6duy2ig13mn39dozqytycdrt8zcg27wdwdeesjjf4jn2exetcgf7vcb70j54xu0orekdn70k69qzlqf1fc3n3o80euhw4tl37r1hnzjp6684eccn76yp51me1hjpd2bs24s0cc63ns5hmo6vnpdfbomh372rkf478aedrhiaaoi16fyui9dc == \z\f\r\e\j\w\2\a\y\z\8\j\9\s\n\t\h\3\i\h\t\2\2\u\c\2\u\e\e\9\z\6\o\0\n\3\1\d\m\n\e\2\7\w\g\9\a\9\4\a\s\l\v\w\d\b\0\y\y\s\w\3\i\y\v\v\1\3\k\b\4\c\3\k\5\1\p\u\k\p\l\m\w\u\j\3\f\t\z\g\i\t\8\v\5\w\4\4\m\5\5\d\g\v\5\r\c\w\h\2\j\1\z\4\y\x\a\s\6\h\3\w\4\l\1\8\r\u\d\1\b\w\1\2\7\9\s\r\l\p\3\2\4\q\h\w\v\p\4\c\k\l\q\a\y\n\z\w\a\q\n\w\f\c\5\p\7\n\l\c\t\v\w\k\8\m\d\5\0\g\u\0\o\w\r\0\t\p\x\a\m\2\t\r\0\j\i\c\4\3\l\s\8\n\a\s\f\v\h\b\w\v\h\m\9\3\8\7\o\m\j\y\2\a\n\k\b\l\z\c\y\y\d\u\v\j\7\8\s\l\2\x\i\a\a\o\4\g\u\8\4\v\3\n\d\c\9\0\u\r\o\3\x\4\s\h\t\0\q\2\y\b\n\v\w\f\9\9\p\d\k\5\t\0\r\0\6\b\o\u\5\r\5\g\f\b\2\y\8\a\5\v\1\9\q\6\c\g\7\2\t\c\3\f\q\9\7\3\2\d\2\r\q\x\s\7\s\7\g\f\0\s\6\d\u\y\2\i\g\1\3\m\n\3\9\d\o\z\q\y\t\y\c\d\r\t\8\z\c\g\2\7\w\d\w\d\e\e\s\j\j\f\4\j\n\2\e\x\e\t\c\g\f\7\v\c\b\7\0\j\5\4\x\u\0\o\r\e\k\d\n\7\0\k\6\9\q\z\l\q\f\1\f\c\3\n\3\o\8\0\e\u\h\w\4\t\l\3\7\r\1\h\n\z\j\p\6\6\8\4\e\c\c\n\7\6\y\p\5\1\m\e\1\h\j\p\d\2\b\s\2\4\s\0\c\c\6\3\n\s\5\h\m\o\6\v\n\p\d\f\b\o\m\h\3\7\2\r\k\f\4\7\8\a\e\d\r\h\i\a\a\o\i\1\6\f\y\u\i\9\d\c ]] 00:07:22.348 00:07:22.348 real 0m1.956s 00:07:22.348 user 0m1.152s 00:07:22.348 sys 0m0.608s 00:07:22.348 ************************************ 00:07:22.348 END TEST dd_flag_nofollow 00:07:22.348 ************************************ 00:07:22.348 16:56:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:22.348 16:56:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:07:22.348 16:56:12 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:07:22.348 16:56:12 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:07:22.348 16:56:12 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:22.348 16:56:12 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:22.348 16:56:12 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:22.348 ************************************ 00:07:22.348 START TEST dd_flag_noatime 00:07:22.348 ************************************ 00:07:22.348 16:56:12 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1123 -- # noatime 00:07:22.348 16:56:12 
spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local atime_if 00:07:22.348 16:56:12 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:07:22.348 16:56:12 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:07:22.348 16:56:12 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:07:22.348 16:56:12 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:07:22.348 16:56:12 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:22.348 16:56:12 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1721062572 00:07:22.348 16:56:12 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:22.348 16:56:12 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1721062572 00:07:22.348 16:56:12 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:07:23.285 16:56:13 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:23.285 [2024-07-15 16:56:13.537071] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:07:23.285 [2024-07-15 16:56:13.537198] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63039 ] 00:07:23.544 [2024-07-15 16:56:13.677889] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.544 [2024-07-15 16:56:13.811293] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.806 [2024-07-15 16:56:13.867074] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:24.066  Copying: 512/512 [B] (average 500 kBps) 00:07:24.066 00:07:24.066 16:56:14 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:24.066 16:56:14 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1721062572 )) 00:07:24.066 16:56:14 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:24.066 16:56:14 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1721062572 )) 00:07:24.066 16:56:14 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:24.066 [2024-07-15 16:56:14.187063] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
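
dd_flag_noatime snapshots the source file's access time with stat --printf=%X, copies it with --iflag=noatime, and asserts that the atime stayed put; a later copy without the flag is expected to move it. A hedged sketch of the same pattern with coreutils dd; the file is a placeholder, O_NOATIME generally requires owning the file, and on a relatime mount the unflagged read may still leave the atime alone:

    f=$(mktemp)                                  # stand-in for dd.dump0
    head -c 512 /dev/urandom > "$f"
    atime_before=$(stat --printf=%X "$f")
    sleep 1
    dd if="$f" iflag=noatime of=/dev/null status=none
    atime_after=$(stat --printf=%X "$f")
    (( atime_before == atime_after )) && echo 'atime preserved by iflag=noatime'
    rm -f "$f"
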
00:07:24.066 [2024-07-15 16:56:14.187196] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63053 ] 00:07:24.066 [2024-07-15 16:56:14.324592] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.325 [2024-07-15 16:56:14.442020] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.325 [2024-07-15 16:56:14.496131] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:24.584  Copying: 512/512 [B] (average 500 kBps) 00:07:24.584 00:07:24.584 16:56:14 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:24.584 16:56:14 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1721062574 )) 00:07:24.584 00:07:24.584 real 0m2.306s 00:07:24.584 user 0m0.773s 00:07:24.584 sys 0m0.576s 00:07:24.584 16:56:14 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:24.584 16:56:14 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:07:24.584 ************************************ 00:07:24.584 END TEST dd_flag_noatime 00:07:24.584 ************************************ 00:07:24.584 16:56:14 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:07:24.584 16:56:14 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:07:24.584 16:56:14 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:24.584 16:56:14 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:24.584 16:56:14 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:24.584 ************************************ 00:07:24.584 START TEST dd_flags_misc 00:07:24.584 ************************************ 00:07:24.584 16:56:14 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1123 -- # io 00:07:24.584 16:56:14 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:07:24.584 16:56:14 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:07:24.584 16:56:14 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:07:24.584 16:56:14 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:24.584 16:56:14 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:07:24.584 16:56:14 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:07:24.584 16:56:14 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:07:24.584 16:56:14 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:24.584 16:56:14 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:24.584 [2024-07-15 16:56:14.876012] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
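
For orientation: dd_flags_misc walks a small cross product of read-side and write-side flags, and the eight spdk_pid runs that follow (63081 through 63159) are exactly those combinations. A sketch of the loop as it can be reconstructed from the posix.sh@81/@82/@85/@87 xtrace above; the echo stands in for the real run_test/spdk_dd invocation:

    flags_ro=(direct nonblock)                 # read-side flags (--iflag)
    flags_rw=("${flags_ro[@]}" sync dsync)     # write-side flags (--oflag)
    for flag_ro in "${flags_ro[@]}"; do
      for flag_rw in "${flags_rw[@]}"; do
        echo "spdk_dd --if=dd.dump0 --iflag=$flag_ro --of=dd.dump1 --oflag=$flag_rw"
      done
    done
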
00:07:24.584 [2024-07-15 16:56:14.876162] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63081 ] 00:07:24.844 [2024-07-15 16:56:15.015190] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.102 [2024-07-15 16:56:15.149170] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.102 [2024-07-15 16:56:15.207988] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:25.362  Copying: 512/512 [B] (average 500 kBps) 00:07:25.362 00:07:25.362 16:56:15 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 57sen6acbv51ry5bki3v729mh7md5bdt60ts48fc2v63qtc30pn7advx60tzbrsmj95dzv9q35xfz5g5kkj9hc17i2pxg96yxegdpd78brrw4mo8rdy4dzhalattjxu652yd8g5z0eycdudw3nnfkds3v1c9td2vkbmmddab652sjt331oc0wcxcymx1evhfpz943afidj6lf315sdf05y0ij6a4y3u0vkdjfo5rcawse1p7ti0og8p6g1hksmatekto744byj3v59sut13qoeoc9sqc2k31g3bqd5csed02jt2q2d2um7ebluq3ezkjlmuj4ljhdw3eq7uh9u5ju35jd32mrj20ozmrz26167ioulgwysglih08rem2su7fdp71u0g2rp287a0o41nzamkl1d25n5yjj6j0t28ab2r2bgxs23xtjqhvwu55x7i45ndk0ve1zirguivfn82cuiv80s9r9h21zky5y9gidq4j8qhtk6q6zp24njluw28p == \5\7\s\e\n\6\a\c\b\v\5\1\r\y\5\b\k\i\3\v\7\2\9\m\h\7\m\d\5\b\d\t\6\0\t\s\4\8\f\c\2\v\6\3\q\t\c\3\0\p\n\7\a\d\v\x\6\0\t\z\b\r\s\m\j\9\5\d\z\v\9\q\3\5\x\f\z\5\g\5\k\k\j\9\h\c\1\7\i\2\p\x\g\9\6\y\x\e\g\d\p\d\7\8\b\r\r\w\4\m\o\8\r\d\y\4\d\z\h\a\l\a\t\t\j\x\u\6\5\2\y\d\8\g\5\z\0\e\y\c\d\u\d\w\3\n\n\f\k\d\s\3\v\1\c\9\t\d\2\v\k\b\m\m\d\d\a\b\6\5\2\s\j\t\3\3\1\o\c\0\w\c\x\c\y\m\x\1\e\v\h\f\p\z\9\4\3\a\f\i\d\j\6\l\f\3\1\5\s\d\f\0\5\y\0\i\j\6\a\4\y\3\u\0\v\k\d\j\f\o\5\r\c\a\w\s\e\1\p\7\t\i\0\o\g\8\p\6\g\1\h\k\s\m\a\t\e\k\t\o\7\4\4\b\y\j\3\v\5\9\s\u\t\1\3\q\o\e\o\c\9\s\q\c\2\k\3\1\g\3\b\q\d\5\c\s\e\d\0\2\j\t\2\q\2\d\2\u\m\7\e\b\l\u\q\3\e\z\k\j\l\m\u\j\4\l\j\h\d\w\3\e\q\7\u\h\9\u\5\j\u\3\5\j\d\3\2\m\r\j\2\0\o\z\m\r\z\2\6\1\6\7\i\o\u\l\g\w\y\s\g\l\i\h\0\8\r\e\m\2\s\u\7\f\d\p\7\1\u\0\g\2\r\p\2\8\7\a\0\o\4\1\n\z\a\m\k\l\1\d\2\5\n\5\y\j\j\6\j\0\t\2\8\a\b\2\r\2\b\g\x\s\2\3\x\t\j\q\h\v\w\u\5\5\x\7\i\4\5\n\d\k\0\v\e\1\z\i\r\g\u\i\v\f\n\8\2\c\u\i\v\8\0\s\9\r\9\h\2\1\z\k\y\5\y\9\g\i\d\q\4\j\8\q\h\t\k\6\q\6\z\p\2\4\n\j\l\u\w\2\8\p ]] 00:07:25.362 16:56:15 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:25.362 16:56:15 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:25.362 [2024-07-15 16:56:15.528495] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:07:25.362 [2024-07-15 16:56:15.528616] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63096 ] 00:07:25.621 [2024-07-15 16:56:15.666768] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.621 [2024-07-15 16:56:15.781571] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.621 [2024-07-15 16:56:15.836157] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:25.880  Copying: 512/512 [B] (average 500 kBps) 00:07:25.880 00:07:25.880 16:56:16 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 57sen6acbv51ry5bki3v729mh7md5bdt60ts48fc2v63qtc30pn7advx60tzbrsmj95dzv9q35xfz5g5kkj9hc17i2pxg96yxegdpd78brrw4mo8rdy4dzhalattjxu652yd8g5z0eycdudw3nnfkds3v1c9td2vkbmmddab652sjt331oc0wcxcymx1evhfpz943afidj6lf315sdf05y0ij6a4y3u0vkdjfo5rcawse1p7ti0og8p6g1hksmatekto744byj3v59sut13qoeoc9sqc2k31g3bqd5csed02jt2q2d2um7ebluq3ezkjlmuj4ljhdw3eq7uh9u5ju35jd32mrj20ozmrz26167ioulgwysglih08rem2su7fdp71u0g2rp287a0o41nzamkl1d25n5yjj6j0t28ab2r2bgxs23xtjqhvwu55x7i45ndk0ve1zirguivfn82cuiv80s9r9h21zky5y9gidq4j8qhtk6q6zp24njluw28p == \5\7\s\e\n\6\a\c\b\v\5\1\r\y\5\b\k\i\3\v\7\2\9\m\h\7\m\d\5\b\d\t\6\0\t\s\4\8\f\c\2\v\6\3\q\t\c\3\0\p\n\7\a\d\v\x\6\0\t\z\b\r\s\m\j\9\5\d\z\v\9\q\3\5\x\f\z\5\g\5\k\k\j\9\h\c\1\7\i\2\p\x\g\9\6\y\x\e\g\d\p\d\7\8\b\r\r\w\4\m\o\8\r\d\y\4\d\z\h\a\l\a\t\t\j\x\u\6\5\2\y\d\8\g\5\z\0\e\y\c\d\u\d\w\3\n\n\f\k\d\s\3\v\1\c\9\t\d\2\v\k\b\m\m\d\d\a\b\6\5\2\s\j\t\3\3\1\o\c\0\w\c\x\c\y\m\x\1\e\v\h\f\p\z\9\4\3\a\f\i\d\j\6\l\f\3\1\5\s\d\f\0\5\y\0\i\j\6\a\4\y\3\u\0\v\k\d\j\f\o\5\r\c\a\w\s\e\1\p\7\t\i\0\o\g\8\p\6\g\1\h\k\s\m\a\t\e\k\t\o\7\4\4\b\y\j\3\v\5\9\s\u\t\1\3\q\o\e\o\c\9\s\q\c\2\k\3\1\g\3\b\q\d\5\c\s\e\d\0\2\j\t\2\q\2\d\2\u\m\7\e\b\l\u\q\3\e\z\k\j\l\m\u\j\4\l\j\h\d\w\3\e\q\7\u\h\9\u\5\j\u\3\5\j\d\3\2\m\r\j\2\0\o\z\m\r\z\2\6\1\6\7\i\o\u\l\g\w\y\s\g\l\i\h\0\8\r\e\m\2\s\u\7\f\d\p\7\1\u\0\g\2\r\p\2\8\7\a\0\o\4\1\n\z\a\m\k\l\1\d\2\5\n\5\y\j\j\6\j\0\t\2\8\a\b\2\r\2\b\g\x\s\2\3\x\t\j\q\h\v\w\u\5\5\x\7\i\4\5\n\d\k\0\v\e\1\z\i\r\g\u\i\v\f\n\8\2\c\u\i\v\8\0\s\9\r\9\h\2\1\z\k\y\5\y\9\g\i\d\q\4\j\8\q\h\t\k\6\q\6\z\p\2\4\n\j\l\u\w\2\8\p ]] 00:07:25.880 16:56:16 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:25.880 16:56:16 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:25.880 [2024-07-15 16:56:16.153935] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:07:25.880 [2024-07-15 16:56:16.154036] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63106 ] 00:07:26.138 [2024-07-15 16:56:16.290540] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.138 [2024-07-15 16:56:16.413622] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.397 [2024-07-15 16:56:16.468952] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:26.655  Copying: 512/512 [B] (average 125 kBps) 00:07:26.655 00:07:26.655 16:56:16 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 57sen6acbv51ry5bki3v729mh7md5bdt60ts48fc2v63qtc30pn7advx60tzbrsmj95dzv9q35xfz5g5kkj9hc17i2pxg96yxegdpd78brrw4mo8rdy4dzhalattjxu652yd8g5z0eycdudw3nnfkds3v1c9td2vkbmmddab652sjt331oc0wcxcymx1evhfpz943afidj6lf315sdf05y0ij6a4y3u0vkdjfo5rcawse1p7ti0og8p6g1hksmatekto744byj3v59sut13qoeoc9sqc2k31g3bqd5csed02jt2q2d2um7ebluq3ezkjlmuj4ljhdw3eq7uh9u5ju35jd32mrj20ozmrz26167ioulgwysglih08rem2su7fdp71u0g2rp287a0o41nzamkl1d25n5yjj6j0t28ab2r2bgxs23xtjqhvwu55x7i45ndk0ve1zirguivfn82cuiv80s9r9h21zky5y9gidq4j8qhtk6q6zp24njluw28p == \5\7\s\e\n\6\a\c\b\v\5\1\r\y\5\b\k\i\3\v\7\2\9\m\h\7\m\d\5\b\d\t\6\0\t\s\4\8\f\c\2\v\6\3\q\t\c\3\0\p\n\7\a\d\v\x\6\0\t\z\b\r\s\m\j\9\5\d\z\v\9\q\3\5\x\f\z\5\g\5\k\k\j\9\h\c\1\7\i\2\p\x\g\9\6\y\x\e\g\d\p\d\7\8\b\r\r\w\4\m\o\8\r\d\y\4\d\z\h\a\l\a\t\t\j\x\u\6\5\2\y\d\8\g\5\z\0\e\y\c\d\u\d\w\3\n\n\f\k\d\s\3\v\1\c\9\t\d\2\v\k\b\m\m\d\d\a\b\6\5\2\s\j\t\3\3\1\o\c\0\w\c\x\c\y\m\x\1\e\v\h\f\p\z\9\4\3\a\f\i\d\j\6\l\f\3\1\5\s\d\f\0\5\y\0\i\j\6\a\4\y\3\u\0\v\k\d\j\f\o\5\r\c\a\w\s\e\1\p\7\t\i\0\o\g\8\p\6\g\1\h\k\s\m\a\t\e\k\t\o\7\4\4\b\y\j\3\v\5\9\s\u\t\1\3\q\o\e\o\c\9\s\q\c\2\k\3\1\g\3\b\q\d\5\c\s\e\d\0\2\j\t\2\q\2\d\2\u\m\7\e\b\l\u\q\3\e\z\k\j\l\m\u\j\4\l\j\h\d\w\3\e\q\7\u\h\9\u\5\j\u\3\5\j\d\3\2\m\r\j\2\0\o\z\m\r\z\2\6\1\6\7\i\o\u\l\g\w\y\s\g\l\i\h\0\8\r\e\m\2\s\u\7\f\d\p\7\1\u\0\g\2\r\p\2\8\7\a\0\o\4\1\n\z\a\m\k\l\1\d\2\5\n\5\y\j\j\6\j\0\t\2\8\a\b\2\r\2\b\g\x\s\2\3\x\t\j\q\h\v\w\u\5\5\x\7\i\4\5\n\d\k\0\v\e\1\z\i\r\g\u\i\v\f\n\8\2\c\u\i\v\8\0\s\9\r\9\h\2\1\z\k\y\5\y\9\g\i\d\q\4\j\8\q\h\t\k\6\q\6\z\p\2\4\n\j\l\u\w\2\8\p ]] 00:07:26.655 16:56:16 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:26.655 16:56:16 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:26.655 [2024-07-15 16:56:16.791922] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
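
The wall of escaped characters in each [[ ... == ... ]] line above is only the 512-byte random payload read back out of dd.dump1 and compared with what was generated for dd.dump0; when the two strings match, that flag pair passed. The same check, stripped to its essentials with temporary files and coreutils dd (no special flags, just the copy-and-compare step):

    src=$(mktemp); dst=$(mktemp)
    head -c 512 /dev/urandom > "$src"
    dd if="$src" of="$dst" status=none
    cmp -s "$src" "$dst" && echo 'payload round-tripped intact'
    rm -f "$src" "$dst"
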
00:07:26.655 [2024-07-15 16:56:16.792031] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63115 ] 00:07:26.655 [2024-07-15 16:56:16.932596] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.913 [2024-07-15 16:56:17.049188] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.913 [2024-07-15 16:56:17.104604] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:27.171  Copying: 512/512 [B] (average 250 kBps) 00:07:27.171 00:07:27.171 16:56:17 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 57sen6acbv51ry5bki3v729mh7md5bdt60ts48fc2v63qtc30pn7advx60tzbrsmj95dzv9q35xfz5g5kkj9hc17i2pxg96yxegdpd78brrw4mo8rdy4dzhalattjxu652yd8g5z0eycdudw3nnfkds3v1c9td2vkbmmddab652sjt331oc0wcxcymx1evhfpz943afidj6lf315sdf05y0ij6a4y3u0vkdjfo5rcawse1p7ti0og8p6g1hksmatekto744byj3v59sut13qoeoc9sqc2k31g3bqd5csed02jt2q2d2um7ebluq3ezkjlmuj4ljhdw3eq7uh9u5ju35jd32mrj20ozmrz26167ioulgwysglih08rem2su7fdp71u0g2rp287a0o41nzamkl1d25n5yjj6j0t28ab2r2bgxs23xtjqhvwu55x7i45ndk0ve1zirguivfn82cuiv80s9r9h21zky5y9gidq4j8qhtk6q6zp24njluw28p == \5\7\s\e\n\6\a\c\b\v\5\1\r\y\5\b\k\i\3\v\7\2\9\m\h\7\m\d\5\b\d\t\6\0\t\s\4\8\f\c\2\v\6\3\q\t\c\3\0\p\n\7\a\d\v\x\6\0\t\z\b\r\s\m\j\9\5\d\z\v\9\q\3\5\x\f\z\5\g\5\k\k\j\9\h\c\1\7\i\2\p\x\g\9\6\y\x\e\g\d\p\d\7\8\b\r\r\w\4\m\o\8\r\d\y\4\d\z\h\a\l\a\t\t\j\x\u\6\5\2\y\d\8\g\5\z\0\e\y\c\d\u\d\w\3\n\n\f\k\d\s\3\v\1\c\9\t\d\2\v\k\b\m\m\d\d\a\b\6\5\2\s\j\t\3\3\1\o\c\0\w\c\x\c\y\m\x\1\e\v\h\f\p\z\9\4\3\a\f\i\d\j\6\l\f\3\1\5\s\d\f\0\5\y\0\i\j\6\a\4\y\3\u\0\v\k\d\j\f\o\5\r\c\a\w\s\e\1\p\7\t\i\0\o\g\8\p\6\g\1\h\k\s\m\a\t\e\k\t\o\7\4\4\b\y\j\3\v\5\9\s\u\t\1\3\q\o\e\o\c\9\s\q\c\2\k\3\1\g\3\b\q\d\5\c\s\e\d\0\2\j\t\2\q\2\d\2\u\m\7\e\b\l\u\q\3\e\z\k\j\l\m\u\j\4\l\j\h\d\w\3\e\q\7\u\h\9\u\5\j\u\3\5\j\d\3\2\m\r\j\2\0\o\z\m\r\z\2\6\1\6\7\i\o\u\l\g\w\y\s\g\l\i\h\0\8\r\e\m\2\s\u\7\f\d\p\7\1\u\0\g\2\r\p\2\8\7\a\0\o\4\1\n\z\a\m\k\l\1\d\2\5\n\5\y\j\j\6\j\0\t\2\8\a\b\2\r\2\b\g\x\s\2\3\x\t\j\q\h\v\w\u\5\5\x\7\i\4\5\n\d\k\0\v\e\1\z\i\r\g\u\i\v\f\n\8\2\c\u\i\v\8\0\s\9\r\9\h\2\1\z\k\y\5\y\9\g\i\d\q\4\j\8\q\h\t\k\6\q\6\z\p\2\4\n\j\l\u\w\2\8\p ]] 00:07:27.171 16:56:17 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:27.171 16:56:17 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:07:27.171 16:56:17 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:07:27.171 16:56:17 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:07:27.171 16:56:17 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:27.171 16:56:17 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:27.171 [2024-07-15 16:56:17.420982] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:07:27.171 [2024-07-15 16:56:17.421066] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63125 ] 00:07:27.430 [2024-07-15 16:56:17.561578] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.430 [2024-07-15 16:56:17.672156] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.430 [2024-07-15 16:56:17.726894] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:27.688  Copying: 512/512 [B] (average 500 kBps) 00:07:27.688 00:07:27.688 16:56:17 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ mzzpsxn7ljbkgs3hdjf8x5fxto0dld1ggk8xyrw02r7kq2jx8xl6ae4ctbpghxs1gd1emknf6ouuvunco43p1w6jjerytnhn0shvl194pwpf3s2trlehe2yjahnw20srp7ma23q2m1bssr2lszp90yz1kof23mb2g9sb982qusyybm1zv0v6g8y1q8yv917eafmdrg4v8osvr20q8daz298rfzwd9rer0jli4a49x6428wk4x3ycp3dbf0plyqxfg1vh9vf1vchkgd1mfk3m2prjjladmatz1skr3c1nmbytdxn0aznezuftdwmfp0f2nvbv4ldfkea3rhf8nhsae5xnwazthxgt6p4hbeeel2i1rwulkyny79u3lmvul1vlbjbsme2m6amdssn1y0xsiwhdrxrwt676x3gtc1vlovd49pca15u4zz78dtef01qzpyqpzhftxz84g5387apre5y6dytaoao1otxyncnco66z7yttua1gzglzdadkhr01 == \m\z\z\p\s\x\n\7\l\j\b\k\g\s\3\h\d\j\f\8\x\5\f\x\t\o\0\d\l\d\1\g\g\k\8\x\y\r\w\0\2\r\7\k\q\2\j\x\8\x\l\6\a\e\4\c\t\b\p\g\h\x\s\1\g\d\1\e\m\k\n\f\6\o\u\u\v\u\n\c\o\4\3\p\1\w\6\j\j\e\r\y\t\n\h\n\0\s\h\v\l\1\9\4\p\w\p\f\3\s\2\t\r\l\e\h\e\2\y\j\a\h\n\w\2\0\s\r\p\7\m\a\2\3\q\2\m\1\b\s\s\r\2\l\s\z\p\9\0\y\z\1\k\o\f\2\3\m\b\2\g\9\s\b\9\8\2\q\u\s\y\y\b\m\1\z\v\0\v\6\g\8\y\1\q\8\y\v\9\1\7\e\a\f\m\d\r\g\4\v\8\o\s\v\r\2\0\q\8\d\a\z\2\9\8\r\f\z\w\d\9\r\e\r\0\j\l\i\4\a\4\9\x\6\4\2\8\w\k\4\x\3\y\c\p\3\d\b\f\0\p\l\y\q\x\f\g\1\v\h\9\v\f\1\v\c\h\k\g\d\1\m\f\k\3\m\2\p\r\j\j\l\a\d\m\a\t\z\1\s\k\r\3\c\1\n\m\b\y\t\d\x\n\0\a\z\n\e\z\u\f\t\d\w\m\f\p\0\f\2\n\v\b\v\4\l\d\f\k\e\a\3\r\h\f\8\n\h\s\a\e\5\x\n\w\a\z\t\h\x\g\t\6\p\4\h\b\e\e\e\l\2\i\1\r\w\u\l\k\y\n\y\7\9\u\3\l\m\v\u\l\1\v\l\b\j\b\s\m\e\2\m\6\a\m\d\s\s\n\1\y\0\x\s\i\w\h\d\r\x\r\w\t\6\7\6\x\3\g\t\c\1\v\l\o\v\d\4\9\p\c\a\1\5\u\4\z\z\7\8\d\t\e\f\0\1\q\z\p\y\q\p\z\h\f\t\x\z\8\4\g\5\3\8\7\a\p\r\e\5\y\6\d\y\t\a\o\a\o\1\o\t\x\y\n\c\n\c\o\6\6\z\7\y\t\t\u\a\1\g\z\g\l\z\d\a\d\k\h\r\0\1 ]] 00:07:27.688 16:56:17 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:27.688 16:56:17 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:27.947 [2024-07-15 16:56:18.036432] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:07:27.947 [2024-07-15 16:56:18.036587] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63140 ] 00:07:27.947 [2024-07-15 16:56:18.175895] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.206 [2024-07-15 16:56:18.281541] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.206 [2024-07-15 16:56:18.334789] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:28.464  Copying: 512/512 [B] (average 500 kBps) 00:07:28.464 00:07:28.464 16:56:18 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ mzzpsxn7ljbkgs3hdjf8x5fxto0dld1ggk8xyrw02r7kq2jx8xl6ae4ctbpghxs1gd1emknf6ouuvunco43p1w6jjerytnhn0shvl194pwpf3s2trlehe2yjahnw20srp7ma23q2m1bssr2lszp90yz1kof23mb2g9sb982qusyybm1zv0v6g8y1q8yv917eafmdrg4v8osvr20q8daz298rfzwd9rer0jli4a49x6428wk4x3ycp3dbf0plyqxfg1vh9vf1vchkgd1mfk3m2prjjladmatz1skr3c1nmbytdxn0aznezuftdwmfp0f2nvbv4ldfkea3rhf8nhsae5xnwazthxgt6p4hbeeel2i1rwulkyny79u3lmvul1vlbjbsme2m6amdssn1y0xsiwhdrxrwt676x3gtc1vlovd49pca15u4zz78dtef01qzpyqpzhftxz84g5387apre5y6dytaoao1otxyncnco66z7yttua1gzglzdadkhr01 == \m\z\z\p\s\x\n\7\l\j\b\k\g\s\3\h\d\j\f\8\x\5\f\x\t\o\0\d\l\d\1\g\g\k\8\x\y\r\w\0\2\r\7\k\q\2\j\x\8\x\l\6\a\e\4\c\t\b\p\g\h\x\s\1\g\d\1\e\m\k\n\f\6\o\u\u\v\u\n\c\o\4\3\p\1\w\6\j\j\e\r\y\t\n\h\n\0\s\h\v\l\1\9\4\p\w\p\f\3\s\2\t\r\l\e\h\e\2\y\j\a\h\n\w\2\0\s\r\p\7\m\a\2\3\q\2\m\1\b\s\s\r\2\l\s\z\p\9\0\y\z\1\k\o\f\2\3\m\b\2\g\9\s\b\9\8\2\q\u\s\y\y\b\m\1\z\v\0\v\6\g\8\y\1\q\8\y\v\9\1\7\e\a\f\m\d\r\g\4\v\8\o\s\v\r\2\0\q\8\d\a\z\2\9\8\r\f\z\w\d\9\r\e\r\0\j\l\i\4\a\4\9\x\6\4\2\8\w\k\4\x\3\y\c\p\3\d\b\f\0\p\l\y\q\x\f\g\1\v\h\9\v\f\1\v\c\h\k\g\d\1\m\f\k\3\m\2\p\r\j\j\l\a\d\m\a\t\z\1\s\k\r\3\c\1\n\m\b\y\t\d\x\n\0\a\z\n\e\z\u\f\t\d\w\m\f\p\0\f\2\n\v\b\v\4\l\d\f\k\e\a\3\r\h\f\8\n\h\s\a\e\5\x\n\w\a\z\t\h\x\g\t\6\p\4\h\b\e\e\e\l\2\i\1\r\w\u\l\k\y\n\y\7\9\u\3\l\m\v\u\l\1\v\l\b\j\b\s\m\e\2\m\6\a\m\d\s\s\n\1\y\0\x\s\i\w\h\d\r\x\r\w\t\6\7\6\x\3\g\t\c\1\v\l\o\v\d\4\9\p\c\a\1\5\u\4\z\z\7\8\d\t\e\f\0\1\q\z\p\y\q\p\z\h\f\t\x\z\8\4\g\5\3\8\7\a\p\r\e\5\y\6\d\y\t\a\o\a\o\1\o\t\x\y\n\c\n\c\o\6\6\z\7\y\t\t\u\a\1\g\z\g\l\z\d\a\d\k\h\r\0\1 ]] 00:07:28.464 16:56:18 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:28.464 16:56:18 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:28.464 [2024-07-15 16:56:18.623682] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:07:28.464 [2024-07-15 16:56:18.623771] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63149 ] 00:07:28.464 [2024-07-15 16:56:18.754100] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.723 [2024-07-15 16:56:18.844959] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.723 [2024-07-15 16:56:18.902409] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:28.982  Copying: 512/512 [B] (average 250 kBps) 00:07:28.982 00:07:28.982 16:56:19 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ mzzpsxn7ljbkgs3hdjf8x5fxto0dld1ggk8xyrw02r7kq2jx8xl6ae4ctbpghxs1gd1emknf6ouuvunco43p1w6jjerytnhn0shvl194pwpf3s2trlehe2yjahnw20srp7ma23q2m1bssr2lszp90yz1kof23mb2g9sb982qusyybm1zv0v6g8y1q8yv917eafmdrg4v8osvr20q8daz298rfzwd9rer0jli4a49x6428wk4x3ycp3dbf0plyqxfg1vh9vf1vchkgd1mfk3m2prjjladmatz1skr3c1nmbytdxn0aznezuftdwmfp0f2nvbv4ldfkea3rhf8nhsae5xnwazthxgt6p4hbeeel2i1rwulkyny79u3lmvul1vlbjbsme2m6amdssn1y0xsiwhdrxrwt676x3gtc1vlovd49pca15u4zz78dtef01qzpyqpzhftxz84g5387apre5y6dytaoao1otxyncnco66z7yttua1gzglzdadkhr01 == \m\z\z\p\s\x\n\7\l\j\b\k\g\s\3\h\d\j\f\8\x\5\f\x\t\o\0\d\l\d\1\g\g\k\8\x\y\r\w\0\2\r\7\k\q\2\j\x\8\x\l\6\a\e\4\c\t\b\p\g\h\x\s\1\g\d\1\e\m\k\n\f\6\o\u\u\v\u\n\c\o\4\3\p\1\w\6\j\j\e\r\y\t\n\h\n\0\s\h\v\l\1\9\4\p\w\p\f\3\s\2\t\r\l\e\h\e\2\y\j\a\h\n\w\2\0\s\r\p\7\m\a\2\3\q\2\m\1\b\s\s\r\2\l\s\z\p\9\0\y\z\1\k\o\f\2\3\m\b\2\g\9\s\b\9\8\2\q\u\s\y\y\b\m\1\z\v\0\v\6\g\8\y\1\q\8\y\v\9\1\7\e\a\f\m\d\r\g\4\v\8\o\s\v\r\2\0\q\8\d\a\z\2\9\8\r\f\z\w\d\9\r\e\r\0\j\l\i\4\a\4\9\x\6\4\2\8\w\k\4\x\3\y\c\p\3\d\b\f\0\p\l\y\q\x\f\g\1\v\h\9\v\f\1\v\c\h\k\g\d\1\m\f\k\3\m\2\p\r\j\j\l\a\d\m\a\t\z\1\s\k\r\3\c\1\n\m\b\y\t\d\x\n\0\a\z\n\e\z\u\f\t\d\w\m\f\p\0\f\2\n\v\b\v\4\l\d\f\k\e\a\3\r\h\f\8\n\h\s\a\e\5\x\n\w\a\z\t\h\x\g\t\6\p\4\h\b\e\e\e\l\2\i\1\r\w\u\l\k\y\n\y\7\9\u\3\l\m\v\u\l\1\v\l\b\j\b\s\m\e\2\m\6\a\m\d\s\s\n\1\y\0\x\s\i\w\h\d\r\x\r\w\t\6\7\6\x\3\g\t\c\1\v\l\o\v\d\4\9\p\c\a\1\5\u\4\z\z\7\8\d\t\e\f\0\1\q\z\p\y\q\p\z\h\f\t\x\z\8\4\g\5\3\8\7\a\p\r\e\5\y\6\d\y\t\a\o\a\o\1\o\t\x\y\n\c\n\c\o\6\6\z\7\y\t\t\u\a\1\g\z\g\l\z\d\a\d\k\h\r\0\1 ]] 00:07:28.982 16:56:19 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:28.982 16:56:19 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:28.982 [2024-07-15 16:56:19.213239] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
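
The last two combinations in the matrix differ only in the write-side flag: sync versus dsync. In open(2) terms that is O_SYNC (data plus associated metadata flushed per write) versus O_DSYNC (data only); coreutils dd spells the flags the same way, as this small sketch assumes:

    src=$(mktemp); dst=$(mktemp)
    head -c 512 /dev/urandom > "$src"
    dd if="$src" of="$dst" oflag=dsync status=none   # O_DSYNC: each write waits for the data
    dd if="$src" of="$dst" oflag=sync  status=none   # O_SYNC: data and metadata
    rm -f "$src" "$dst"
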
00:07:28.982 [2024-07-15 16:56:19.213329] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63159 ] 00:07:29.241 [2024-07-15 16:56:19.352636] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.241 [2024-07-15 16:56:19.453917] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.241 [2024-07-15 16:56:19.507581] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:29.500  Copying: 512/512 [B] (average 166 kBps) 00:07:29.500 00:07:29.500 16:56:19 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ mzzpsxn7ljbkgs3hdjf8x5fxto0dld1ggk8xyrw02r7kq2jx8xl6ae4ctbpghxs1gd1emknf6ouuvunco43p1w6jjerytnhn0shvl194pwpf3s2trlehe2yjahnw20srp7ma23q2m1bssr2lszp90yz1kof23mb2g9sb982qusyybm1zv0v6g8y1q8yv917eafmdrg4v8osvr20q8daz298rfzwd9rer0jli4a49x6428wk4x3ycp3dbf0plyqxfg1vh9vf1vchkgd1mfk3m2prjjladmatz1skr3c1nmbytdxn0aznezuftdwmfp0f2nvbv4ldfkea3rhf8nhsae5xnwazthxgt6p4hbeeel2i1rwulkyny79u3lmvul1vlbjbsme2m6amdssn1y0xsiwhdrxrwt676x3gtc1vlovd49pca15u4zz78dtef01qzpyqpzhftxz84g5387apre5y6dytaoao1otxyncnco66z7yttua1gzglzdadkhr01 == \m\z\z\p\s\x\n\7\l\j\b\k\g\s\3\h\d\j\f\8\x\5\f\x\t\o\0\d\l\d\1\g\g\k\8\x\y\r\w\0\2\r\7\k\q\2\j\x\8\x\l\6\a\e\4\c\t\b\p\g\h\x\s\1\g\d\1\e\m\k\n\f\6\o\u\u\v\u\n\c\o\4\3\p\1\w\6\j\j\e\r\y\t\n\h\n\0\s\h\v\l\1\9\4\p\w\p\f\3\s\2\t\r\l\e\h\e\2\y\j\a\h\n\w\2\0\s\r\p\7\m\a\2\3\q\2\m\1\b\s\s\r\2\l\s\z\p\9\0\y\z\1\k\o\f\2\3\m\b\2\g\9\s\b\9\8\2\q\u\s\y\y\b\m\1\z\v\0\v\6\g\8\y\1\q\8\y\v\9\1\7\e\a\f\m\d\r\g\4\v\8\o\s\v\r\2\0\q\8\d\a\z\2\9\8\r\f\z\w\d\9\r\e\r\0\j\l\i\4\a\4\9\x\6\4\2\8\w\k\4\x\3\y\c\p\3\d\b\f\0\p\l\y\q\x\f\g\1\v\h\9\v\f\1\v\c\h\k\g\d\1\m\f\k\3\m\2\p\r\j\j\l\a\d\m\a\t\z\1\s\k\r\3\c\1\n\m\b\y\t\d\x\n\0\a\z\n\e\z\u\f\t\d\w\m\f\p\0\f\2\n\v\b\v\4\l\d\f\k\e\a\3\r\h\f\8\n\h\s\a\e\5\x\n\w\a\z\t\h\x\g\t\6\p\4\h\b\e\e\e\l\2\i\1\r\w\u\l\k\y\n\y\7\9\u\3\l\m\v\u\l\1\v\l\b\j\b\s\m\e\2\m\6\a\m\d\s\s\n\1\y\0\x\s\i\w\h\d\r\x\r\w\t\6\7\6\x\3\g\t\c\1\v\l\o\v\d\4\9\p\c\a\1\5\u\4\z\z\7\8\d\t\e\f\0\1\q\z\p\y\q\p\z\h\f\t\x\z\8\4\g\5\3\8\7\a\p\r\e\5\y\6\d\y\t\a\o\a\o\1\o\t\x\y\n\c\n\c\o\6\6\z\7\y\t\t\u\a\1\g\z\g\l\z\d\a\d\k\h\r\0\1 ]] 00:07:29.500 00:07:29.500 real 0m4.943s 00:07:29.500 user 0m2.854s 00:07:29.500 sys 0m2.277s 00:07:29.500 ************************************ 00:07:29.500 END TEST dd_flags_misc 00:07:29.500 ************************************ 00:07:29.500 16:56:19 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:29.500 16:56:19 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:07:29.759 16:56:19 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:07:29.759 16:56:19 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:07:29.759 16:56:19 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:07:29.759 * Second test run, disabling liburing, forcing AIO 00:07:29.759 16:56:19 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:07:29.759 16:56:19 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:07:29.759 16:56:19 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:29.759 16:56:19 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:07:29.759 16:56:19 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:29.759 ************************************ 00:07:29.759 START TEST dd_flag_append_forced_aio 00:07:29.759 ************************************ 00:07:29.759 16:56:19 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1123 -- # append 00:07:29.759 16:56:19 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:07:29.759 16:56:19 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:07:29.759 16:56:19 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:07:29.759 16:56:19 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:29.759 16:56:19 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:29.759 16:56:19 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=nufmldv0u01haq473s7os3m2889ur0ls 00:07:29.759 16:56:19 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:07:29.759 16:56:19 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:29.759 16:56:19 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:29.759 16:56:19 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=j0pjasguun8q7cekmlvpgrki05c7q84m 00:07:29.759 16:56:19 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s nufmldv0u01haq473s7os3m2889ur0ls 00:07:29.759 16:56:19 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s j0pjasguun8q7cekmlvpgrki05c7q84m 00:07:29.759 16:56:19 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:07:29.759 [2024-07-15 16:56:19.881075] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
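
The append check that starts here writes dd.dump0 into dd.dump1 with --aio --oflag=append and then expects dd.dump1 to read back as its own original 32 bytes followed by dump0's 32 bytes (the j0pj...q84m + nufm...r0ls concatenation tested below). A standalone sketch of that expectation with coreutils dd and throwaway files; note that GNU dd needs conv=notrunc alongside oflag=append so the destination is not truncated first:

    a=$(mktemp); b=$(mktemp)
    printf '%s' nufmldv0u01haq473s7os3m2889ur0ls > "$a"   # stands in for dd.dump0
    printf '%s' j0pjasguun8q7cekmlvpgrki05c7q84m > "$b"   # stands in for dd.dump1
    dd if="$a" of="$b" oflag=append conv=notrunc status=none
    [[ $(<"$b") == j0pjasguun8q7cekmlvpgrki05c7q84mnufmldv0u01haq473s7os3m2889ur0ls ]] \
        && echo 'append left the original prefix intact'
    rm -f "$a" "$b"
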
00:07:29.759 [2024-07-15 16:56:19.881187] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63187 ] 00:07:29.759 [2024-07-15 16:56:20.022593] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.018 [2024-07-15 16:56:20.145241] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.018 [2024-07-15 16:56:20.203973] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:30.277  Copying: 32/32 [B] (average 31 kBps) 00:07:30.277 00:07:30.277 16:56:20 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ j0pjasguun8q7cekmlvpgrki05c7q84mnufmldv0u01haq473s7os3m2889ur0ls == \j\0\p\j\a\s\g\u\u\n\8\q\7\c\e\k\m\l\v\p\g\r\k\i\0\5\c\7\q\8\4\m\n\u\f\m\l\d\v\0\u\0\1\h\a\q\4\7\3\s\7\o\s\3\m\2\8\8\9\u\r\0\l\s ]] 00:07:30.277 00:07:30.277 real 0m0.671s 00:07:30.277 user 0m0.387s 00:07:30.277 sys 0m0.159s 00:07:30.277 ************************************ 00:07:30.277 END TEST dd_flag_append_forced_aio 00:07:30.277 ************************************ 00:07:30.277 16:56:20 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:30.277 16:56:20 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:30.277 16:56:20 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:07:30.277 16:56:20 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:07:30.277 16:56:20 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:30.277 16:56:20 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:30.277 16:56:20 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:30.277 ************************************ 00:07:30.277 START TEST dd_flag_directory_forced_aio 00:07:30.277 ************************************ 00:07:30.277 16:56:20 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1123 -- # directory 00:07:30.277 16:56:20 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:30.277 16:56:20 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:07:30.277 16:56:20 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:30.277 16:56:20 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:30.277 16:56:20 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:30.277 16:56:20 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:30.277 16:56:20 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t 
"$arg")" in 00:07:30.277 16:56:20 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:30.277 16:56:20 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:30.277 16:56:20 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:30.277 16:56:20 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:30.277 16:56:20 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:30.536 [2024-07-15 16:56:20.592268] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:07:30.536 [2024-07-15 16:56:20.592388] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63214 ] 00:07:30.536 [2024-07-15 16:56:20.732935] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.795 [2024-07-15 16:56:20.873171] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.795 [2024-07-15 16:56:20.929993] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:30.795 [2024-07-15 16:56:20.964513] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:30.795 [2024-07-15 16:56:20.964580] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:30.795 [2024-07-15 16:56:20.964609] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:30.795 [2024-07-15 16:56:21.088892] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:31.054 16:56:21 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # es=236 00:07:31.054 16:56:21 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:31.054 16:56:21 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@660 -- # es=108 00:07:31.054 16:56:21 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:07:31.054 16:56:21 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:07:31.054 16:56:21 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:31.054 16:56:21 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:31.054 16:56:21 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:07:31.054 16:56:21 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:31.054 16:56:21 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:31.054 16:56:21 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:31.054 16:56:21 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:31.054 16:56:21 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:31.054 16:56:21 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:31.054 16:56:21 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:31.054 16:56:21 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:31.054 16:56:21 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:31.054 16:56:21 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:31.054 [2024-07-15 16:56:21.245073] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:07:31.054 [2024-07-15 16:56:21.245172] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63229 ] 00:07:31.313 [2024-07-15 16:56:21.378404] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.313 [2024-07-15 16:56:21.506509] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.313 [2024-07-15 16:56:21.566821] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:31.313 [2024-07-15 16:56:21.604631] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:31.313 [2024-07-15 16:56:21.604695] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:31.313 [2024-07-15 16:56:21.604725] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:31.572 [2024-07-15 16:56:21.725077] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:31.572 16:56:21 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # es=236 00:07:31.572 16:56:21 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:31.572 16:56:21 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@660 -- # es=108 00:07:31.572 16:56:21 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:07:31.572 16:56:21 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:07:31.572 
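
The es=236 -> 108 -> 1 sequence that just ran (and that recurs after every expected-failure call in this section) is autotest_common.sh's NOT() helper normalizing the exit status of a command that is supposed to fail. A rough reconstruction of its shape from the xtrace alone, not copied from the real helper:

    NOT() {
      local es=0
      "$@" || es=$?                         # run the command that is expected to fail
      (( es > 128 )) && es=$(( es - 128 ))  # e.g. 236 -> 108: strip the 128+ signal-style offset
      case "$es" in
        0) ;;                               # unexpected success stays 0
        *) es=1 ;;                          # every failure collapses to 1
      esac
      (( !es == 0 ))                        # return success only if the command failed
    }
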
16:56:21 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:31.572 00:07:31.572 real 0m1.296s 00:07:31.572 user 0m0.764s 00:07:31.572 sys 0m0.318s 00:07:31.572 16:56:21 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:31.572 ************************************ 00:07:31.572 END TEST dd_flag_directory_forced_aio 00:07:31.572 ************************************ 00:07:31.572 16:56:21 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:31.865 16:56:21 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:07:31.865 16:56:21 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:07:31.865 16:56:21 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:31.865 16:56:21 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:31.865 16:56:21 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:31.865 ************************************ 00:07:31.865 START TEST dd_flag_nofollow_forced_aio 00:07:31.866 ************************************ 00:07:31.866 16:56:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1123 -- # nofollow 00:07:31.866 16:56:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:31.866 16:56:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:31.866 16:56:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:31.866 16:56:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:31.866 16:56:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:31.866 16:56:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:07:31.866 16:56:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:31.866 16:56:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:31.866 16:56:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:31.866 16:56:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:31.866 16:56:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:31.866 16:56:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -P 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:31.866 16:56:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:31.866 16:56:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:31.866 16:56:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:31.866 16:56:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:31.866 [2024-07-15 16:56:21.945064] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:07:31.866 [2024-07-15 16:56:21.945153] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63257 ] 00:07:31.866 [2024-07-15 16:56:22.086555] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.139 [2024-07-15 16:56:22.219022] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.139 [2024-07-15 16:56:22.278840] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:32.139 [2024-07-15 16:56:22.315714] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:32.139 [2024-07-15 16:56:22.315777] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:32.139 [2024-07-15 16:56:22.315796] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:32.398 [2024-07-15 16:56:22.439389] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:32.398 16:56:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # es=216 00:07:32.398 16:56:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:32.398 16:56:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@660 -- # es=88 00:07:32.398 16:56:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:07:32.398 16:56:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:07:32.398 16:56:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:32.398 16:56:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:32.398 16:56:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:07:32.398 16:56:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 
00:07:32.398 16:56:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:32.398 16:56:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:32.398 16:56:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:32.398 16:56:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:32.398 16:56:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:32.398 16:56:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:32.398 16:56:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:32.398 16:56:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:32.398 16:56:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:32.398 [2024-07-15 16:56:22.627816] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:07:32.398 [2024-07-15 16:56:22.627956] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63267 ] 00:07:32.657 [2024-07-15 16:56:22.779872] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.657 [2024-07-15 16:56:22.904702] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.917 [2024-07-15 16:56:22.963663] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:32.917 [2024-07-15 16:56:22.999013] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:32.917 [2024-07-15 16:56:22.999077] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:32.917 [2024-07-15 16:56:22.999118] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:32.917 [2024-07-15 16:56:23.122923] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:33.176 16:56:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # es=216 00:07:33.176 16:56:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:33.176 16:56:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@660 -- # es=88 00:07:33.176 16:56:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:07:33.176 16:56:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:07:33.176 16:56:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- 
common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:33.176 16:56:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 -- # gen_bytes 512 00:07:33.176 16:56:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:33.176 16:56:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:33.177 16:56:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:33.177 [2024-07-15 16:56:23.288257] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:07:33.177 [2024-07-15 16:56:23.288350] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63280 ] 00:07:33.177 [2024-07-15 16:56:23.427125] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.434 [2024-07-15 16:56:23.536654] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.434 [2024-07-15 16:56:23.596323] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:33.691  Copying: 512/512 [B] (average 500 kBps) 00:07:33.692 00:07:33.692 16:56:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ g7rz0m00r4dlxaln8n3wrocnbwoeo2kp9jbp9por43p4g2wqxfndl797xl24k5er12mjlvajy3rqlhsqbuwzb94zs0g0ev0fdyoqsjd3m4t6hroy3456rl2tdy5ue72x6tmk6hpz8gpgdncyvcl9he3cclynowx3ithd81ewz5jv14ed44dkua6crqcyjj0gdlcvedpc5iwxbdkk67yy95jyrv8gbst7hq4r5ej1ull6rdz46pkrfw0td5we7qh0f1w7kxybf6fba38uv0mjt6heud12378f6emarpubkvulw16mgq5q3nfzdicqf5lth12sbnp48fqadnnayppi45dcs1d3012emnvf9nj6ln1mh8cvust2djdabsrznlc29eq39p3ox4xs71fmp8v2654d47a6rl0m047eouda08cuo84i1bzus1mq9bly88ca95we1pmrmm6gf0j0yfw7wxhkjrfyklu73cvztowgy30xftmzjeblu6odt6mwocty == \g\7\r\z\0\m\0\0\r\4\d\l\x\a\l\n\8\n\3\w\r\o\c\n\b\w\o\e\o\2\k\p\9\j\b\p\9\p\o\r\4\3\p\4\g\2\w\q\x\f\n\d\l\7\9\7\x\l\2\4\k\5\e\r\1\2\m\j\l\v\a\j\y\3\r\q\l\h\s\q\b\u\w\z\b\9\4\z\s\0\g\0\e\v\0\f\d\y\o\q\s\j\d\3\m\4\t\6\h\r\o\y\3\4\5\6\r\l\2\t\d\y\5\u\e\7\2\x\6\t\m\k\6\h\p\z\8\g\p\g\d\n\c\y\v\c\l\9\h\e\3\c\c\l\y\n\o\w\x\3\i\t\h\d\8\1\e\w\z\5\j\v\1\4\e\d\4\4\d\k\u\a\6\c\r\q\c\y\j\j\0\g\d\l\c\v\e\d\p\c\5\i\w\x\b\d\k\k\6\7\y\y\9\5\j\y\r\v\8\g\b\s\t\7\h\q\4\r\5\e\j\1\u\l\l\6\r\d\z\4\6\p\k\r\f\w\0\t\d\5\w\e\7\q\h\0\f\1\w\7\k\x\y\b\f\6\f\b\a\3\8\u\v\0\m\j\t\6\h\e\u\d\1\2\3\7\8\f\6\e\m\a\r\p\u\b\k\v\u\l\w\1\6\m\g\q\5\q\3\n\f\z\d\i\c\q\f\5\l\t\h\1\2\s\b\n\p\4\8\f\q\a\d\n\n\a\y\p\p\i\4\5\d\c\s\1\d\3\0\1\2\e\m\n\v\f\9\n\j\6\l\n\1\m\h\8\c\v\u\s\t\2\d\j\d\a\b\s\r\z\n\l\c\2\9\e\q\3\9\p\3\o\x\4\x\s\7\1\f\m\p\8\v\2\6\5\4\d\4\7\a\6\r\l\0\m\0\4\7\e\o\u\d\a\0\8\c\u\o\8\4\i\1\b\z\u\s\1\m\q\9\b\l\y\8\8\c\a\9\5\w\e\1\p\m\r\m\m\6\g\f\0\j\0\y\f\w\7\w\x\h\k\j\r\f\y\k\l\u\7\3\c\v\z\t\o\w\g\y\3\0\x\f\t\m\z\j\e\b\l\u\6\o\d\t\6\m\w\o\c\t\y ]] 00:07:33.692 00:07:33.692 real 0m2.001s 00:07:33.692 user 0m1.178s 00:07:33.692 sys 0m0.490s 00:07:33.692 ************************************ 00:07:33.692 END TEST dd_flag_nofollow_forced_aio 00:07:33.692 ************************************ 00:07:33.692 16:56:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:33.692 16:56:23 
spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:33.692 16:56:23 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:07:33.692 16:56:23 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:07:33.692 16:56:23 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:33.692 16:56:23 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:33.692 16:56:23 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:33.692 ************************************ 00:07:33.692 START TEST dd_flag_noatime_forced_aio 00:07:33.692 ************************************ 00:07:33.692 16:56:23 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1123 -- # noatime 00:07:33.692 16:56:23 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:07:33.692 16:56:23 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:07:33.692 16:56:23 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:07:33.692 16:56:23 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:33.692 16:56:23 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:33.692 16:56:23 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:33.692 16:56:23 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1721062583 00:07:33.692 16:56:23 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:33.692 16:56:23 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1721062583 00:07:33.692 16:56:23 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:07:35.067 16:56:24 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:35.067 [2024-07-15 16:56:25.008445] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:07:35.067 [2024-07-15 16:56:25.008556] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63320 ] 00:07:35.067 [2024-07-15 16:56:25.146576] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.067 [2024-07-15 16:56:25.265220] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.067 [2024-07-15 16:56:25.323711] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:35.325  Copying: 512/512 [B] (average 500 kBps) 00:07:35.325 00:07:35.325 16:56:25 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:35.325 16:56:25 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1721062583 )) 00:07:35.325 16:56:25 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:35.584 16:56:25 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1721062583 )) 00:07:35.584 16:56:25 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:35.584 [2024-07-15 16:56:25.678220] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:07:35.584 [2024-07-15 16:56:25.678326] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63332 ] 00:07:35.584 [2024-07-15 16:56:25.813664] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.843 [2024-07-15 16:56:25.923192] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.843 [2024-07-15 16:56:25.980581] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:36.101  Copying: 512/512 [B] (average 500 kBps) 00:07:36.101 00:07:36.101 16:56:26 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:36.101 16:56:26 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1721062586 )) 00:07:36.101 00:07:36.101 real 0m2.352s 00:07:36.101 user 0m0.768s 00:07:36.101 sys 0m0.334s 00:07:36.101 16:56:26 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:36.101 16:56:26 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:36.101 ************************************ 00:07:36.101 END TEST dd_flag_noatime_forced_aio 00:07:36.101 ************************************ 00:07:36.101 16:56:26 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:07:36.101 16:56:26 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:07:36.101 16:56:26 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:36.101 16:56:26 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:36.101 16:56:26 
spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:36.101 ************************************ 00:07:36.101 START TEST dd_flags_misc_forced_aio 00:07:36.101 ************************************ 00:07:36.101 16:56:26 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1123 -- # io 00:07:36.101 16:56:26 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:07:36.101 16:56:26 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:07:36.101 16:56:26 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:07:36.101 16:56:26 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:36.101 16:56:26 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:07:36.101 16:56:26 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:36.101 16:56:26 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:36.101 16:56:26 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:36.101 16:56:26 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:36.101 [2024-07-15 16:56:26.398260] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:07:36.101 [2024-07-15 16:56:26.398381] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63364 ] 00:07:36.359 [2024-07-15 16:56:26.538744] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.359 [2024-07-15 16:56:26.653289] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.617 [2024-07-15 16:56:26.710845] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:36.875  Copying: 512/512 [B] (average 500 kBps) 00:07:36.875 00:07:36.875 16:56:26 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 0998pvj6lcsc0iklmtovyxkx1p5zuk2953e2f7809garu7tcj96znuoon2r5czv7yee9qgqsibpzdyltbagc4afb55ehroi4q7kc4omvg7fsq2te094wfrwcockbebfl71ugf1vw9443bfw2lzsuvexyh3e8adc4v6yo80s4azwi1twyzkvmgywe30zjkj8jlt70hq2nubp2xb1u3xpunwfulv7xrq7d0fthp45o80ttlhnio8h2catxmgvfab47hbgcurwgz3w0qbe8ou3l0af7076seztuu5dhdqju53qdtcmstjdt3zkrmf5jkhnu9cfgbj5c94ucuvg4axfx4kz5kypzi59t1pbwdw3kjdz7uqdriha1rhuxvcamojcuzyghprj1zu2pkxvg9gunkj61z8zrwxmbx2c1oplgm818888yzsq5fhch7hlqcv58kch427w27yik60cxgccvync59ibbi9pbach9uawg7nbyucfikslcbm5ozrkvum4g == 
\0\9\9\8\p\v\j\6\l\c\s\c\0\i\k\l\m\t\o\v\y\x\k\x\1\p\5\z\u\k\2\9\5\3\e\2\f\7\8\0\9\g\a\r\u\7\t\c\j\9\6\z\n\u\o\o\n\2\r\5\c\z\v\7\y\e\e\9\q\g\q\s\i\b\p\z\d\y\l\t\b\a\g\c\4\a\f\b\5\5\e\h\r\o\i\4\q\7\k\c\4\o\m\v\g\7\f\s\q\2\t\e\0\9\4\w\f\r\w\c\o\c\k\b\e\b\f\l\7\1\u\g\f\1\v\w\9\4\4\3\b\f\w\2\l\z\s\u\v\e\x\y\h\3\e\8\a\d\c\4\v\6\y\o\8\0\s\4\a\z\w\i\1\t\w\y\z\k\v\m\g\y\w\e\3\0\z\j\k\j\8\j\l\t\7\0\h\q\2\n\u\b\p\2\x\b\1\u\3\x\p\u\n\w\f\u\l\v\7\x\r\q\7\d\0\f\t\h\p\4\5\o\8\0\t\t\l\h\n\i\o\8\h\2\c\a\t\x\m\g\v\f\a\b\4\7\h\b\g\c\u\r\w\g\z\3\w\0\q\b\e\8\o\u\3\l\0\a\f\7\0\7\6\s\e\z\t\u\u\5\d\h\d\q\j\u\5\3\q\d\t\c\m\s\t\j\d\t\3\z\k\r\m\f\5\j\k\h\n\u\9\c\f\g\b\j\5\c\9\4\u\c\u\v\g\4\a\x\f\x\4\k\z\5\k\y\p\z\i\5\9\t\1\p\b\w\d\w\3\k\j\d\z\7\u\q\d\r\i\h\a\1\r\h\u\x\v\c\a\m\o\j\c\u\z\y\g\h\p\r\j\1\z\u\2\p\k\x\v\g\9\g\u\n\k\j\6\1\z\8\z\r\w\x\m\b\x\2\c\1\o\p\l\g\m\8\1\8\8\8\8\y\z\s\q\5\f\h\c\h\7\h\l\q\c\v\5\8\k\c\h\4\2\7\w\2\7\y\i\k\6\0\c\x\g\c\c\v\y\n\c\5\9\i\b\b\i\9\p\b\a\c\h\9\u\a\w\g\7\n\b\y\u\c\f\i\k\s\l\c\b\m\5\o\z\r\k\v\u\m\4\g ]] 00:07:36.875 16:56:26 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:36.875 16:56:26 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:36.875 [2024-07-15 16:56:27.050466] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:07:36.875 [2024-07-15 16:56:27.050584] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63371 ] 00:07:37.134 [2024-07-15 16:56:27.187231] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.134 [2024-07-15 16:56:27.294949] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.134 [2024-07-15 16:56:27.350658] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:37.392  Copying: 512/512 [B] (average 500 kBps) 00:07:37.392 00:07:37.392 16:56:27 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 0998pvj6lcsc0iklmtovyxkx1p5zuk2953e2f7809garu7tcj96znuoon2r5czv7yee9qgqsibpzdyltbagc4afb55ehroi4q7kc4omvg7fsq2te094wfrwcockbebfl71ugf1vw9443bfw2lzsuvexyh3e8adc4v6yo80s4azwi1twyzkvmgywe30zjkj8jlt70hq2nubp2xb1u3xpunwfulv7xrq7d0fthp45o80ttlhnio8h2catxmgvfab47hbgcurwgz3w0qbe8ou3l0af7076seztuu5dhdqju53qdtcmstjdt3zkrmf5jkhnu9cfgbj5c94ucuvg4axfx4kz5kypzi59t1pbwdw3kjdz7uqdriha1rhuxvcamojcuzyghprj1zu2pkxvg9gunkj61z8zrwxmbx2c1oplgm818888yzsq5fhch7hlqcv58kch427w27yik60cxgccvync59ibbi9pbach9uawg7nbyucfikslcbm5ozrkvum4g == 
\0\9\9\8\p\v\j\6\l\c\s\c\0\i\k\l\m\t\o\v\y\x\k\x\1\p\5\z\u\k\2\9\5\3\e\2\f\7\8\0\9\g\a\r\u\7\t\c\j\9\6\z\n\u\o\o\n\2\r\5\c\z\v\7\y\e\e\9\q\g\q\s\i\b\p\z\d\y\l\t\b\a\g\c\4\a\f\b\5\5\e\h\r\o\i\4\q\7\k\c\4\o\m\v\g\7\f\s\q\2\t\e\0\9\4\w\f\r\w\c\o\c\k\b\e\b\f\l\7\1\u\g\f\1\v\w\9\4\4\3\b\f\w\2\l\z\s\u\v\e\x\y\h\3\e\8\a\d\c\4\v\6\y\o\8\0\s\4\a\z\w\i\1\t\w\y\z\k\v\m\g\y\w\e\3\0\z\j\k\j\8\j\l\t\7\0\h\q\2\n\u\b\p\2\x\b\1\u\3\x\p\u\n\w\f\u\l\v\7\x\r\q\7\d\0\f\t\h\p\4\5\o\8\0\t\t\l\h\n\i\o\8\h\2\c\a\t\x\m\g\v\f\a\b\4\7\h\b\g\c\u\r\w\g\z\3\w\0\q\b\e\8\o\u\3\l\0\a\f\7\0\7\6\s\e\z\t\u\u\5\d\h\d\q\j\u\5\3\q\d\t\c\m\s\t\j\d\t\3\z\k\r\m\f\5\j\k\h\n\u\9\c\f\g\b\j\5\c\9\4\u\c\u\v\g\4\a\x\f\x\4\k\z\5\k\y\p\z\i\5\9\t\1\p\b\w\d\w\3\k\j\d\z\7\u\q\d\r\i\h\a\1\r\h\u\x\v\c\a\m\o\j\c\u\z\y\g\h\p\r\j\1\z\u\2\p\k\x\v\g\9\g\u\n\k\j\6\1\z\8\z\r\w\x\m\b\x\2\c\1\o\p\l\g\m\8\1\8\8\8\8\y\z\s\q\5\f\h\c\h\7\h\l\q\c\v\5\8\k\c\h\4\2\7\w\2\7\y\i\k\6\0\c\x\g\c\c\v\y\n\c\5\9\i\b\b\i\9\p\b\a\c\h\9\u\a\w\g\7\n\b\y\u\c\f\i\k\s\l\c\b\m\5\o\z\r\k\v\u\m\4\g ]] 00:07:37.392 16:56:27 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:37.392 16:56:27 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:37.392 [2024-07-15 16:56:27.664325] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:07:37.392 [2024-07-15 16:56:27.664439] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63379 ] 00:07:37.713 [2024-07-15 16:56:27.800515] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.713 [2024-07-15 16:56:27.876864] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.713 [2024-07-15 16:56:27.931740] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:37.971  Copying: 512/512 [B] (average 500 kBps) 00:07:37.971 00:07:37.971 16:56:28 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 0998pvj6lcsc0iklmtovyxkx1p5zuk2953e2f7809garu7tcj96znuoon2r5czv7yee9qgqsibpzdyltbagc4afb55ehroi4q7kc4omvg7fsq2te094wfrwcockbebfl71ugf1vw9443bfw2lzsuvexyh3e8adc4v6yo80s4azwi1twyzkvmgywe30zjkj8jlt70hq2nubp2xb1u3xpunwfulv7xrq7d0fthp45o80ttlhnio8h2catxmgvfab47hbgcurwgz3w0qbe8ou3l0af7076seztuu5dhdqju53qdtcmstjdt3zkrmf5jkhnu9cfgbj5c94ucuvg4axfx4kz5kypzi59t1pbwdw3kjdz7uqdriha1rhuxvcamojcuzyghprj1zu2pkxvg9gunkj61z8zrwxmbx2c1oplgm818888yzsq5fhch7hlqcv58kch427w27yik60cxgccvync59ibbi9pbach9uawg7nbyucfikslcbm5ozrkvum4g == 
\0\9\9\8\p\v\j\6\l\c\s\c\0\i\k\l\m\t\o\v\y\x\k\x\1\p\5\z\u\k\2\9\5\3\e\2\f\7\8\0\9\g\a\r\u\7\t\c\j\9\6\z\n\u\o\o\n\2\r\5\c\z\v\7\y\e\e\9\q\g\q\s\i\b\p\z\d\y\l\t\b\a\g\c\4\a\f\b\5\5\e\h\r\o\i\4\q\7\k\c\4\o\m\v\g\7\f\s\q\2\t\e\0\9\4\w\f\r\w\c\o\c\k\b\e\b\f\l\7\1\u\g\f\1\v\w\9\4\4\3\b\f\w\2\l\z\s\u\v\e\x\y\h\3\e\8\a\d\c\4\v\6\y\o\8\0\s\4\a\z\w\i\1\t\w\y\z\k\v\m\g\y\w\e\3\0\z\j\k\j\8\j\l\t\7\0\h\q\2\n\u\b\p\2\x\b\1\u\3\x\p\u\n\w\f\u\l\v\7\x\r\q\7\d\0\f\t\h\p\4\5\o\8\0\t\t\l\h\n\i\o\8\h\2\c\a\t\x\m\g\v\f\a\b\4\7\h\b\g\c\u\r\w\g\z\3\w\0\q\b\e\8\o\u\3\l\0\a\f\7\0\7\6\s\e\z\t\u\u\5\d\h\d\q\j\u\5\3\q\d\t\c\m\s\t\j\d\t\3\z\k\r\m\f\5\j\k\h\n\u\9\c\f\g\b\j\5\c\9\4\u\c\u\v\g\4\a\x\f\x\4\k\z\5\k\y\p\z\i\5\9\t\1\p\b\w\d\w\3\k\j\d\z\7\u\q\d\r\i\h\a\1\r\h\u\x\v\c\a\m\o\j\c\u\z\y\g\h\p\r\j\1\z\u\2\p\k\x\v\g\9\g\u\n\k\j\6\1\z\8\z\r\w\x\m\b\x\2\c\1\o\p\l\g\m\8\1\8\8\8\8\y\z\s\q\5\f\h\c\h\7\h\l\q\c\v\5\8\k\c\h\4\2\7\w\2\7\y\i\k\6\0\c\x\g\c\c\v\y\n\c\5\9\i\b\b\i\9\p\b\a\c\h\9\u\a\w\g\7\n\b\y\u\c\f\i\k\s\l\c\b\m\5\o\z\r\k\v\u\m\4\g ]] 00:07:37.972 16:56:28 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:37.972 16:56:28 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:37.972 [2024-07-15 16:56:28.244895] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:07:37.972 [2024-07-15 16:56:28.245004] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63392 ] 00:07:38.230 [2024-07-15 16:56:28.382917] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.230 [2024-07-15 16:56:28.489565] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.489 [2024-07-15 16:56:28.543466] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:38.748  Copying: 512/512 [B] (average 500 kBps) 00:07:38.748 00:07:38.748 16:56:28 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 0998pvj6lcsc0iklmtovyxkx1p5zuk2953e2f7809garu7tcj96znuoon2r5czv7yee9qgqsibpzdyltbagc4afb55ehroi4q7kc4omvg7fsq2te094wfrwcockbebfl71ugf1vw9443bfw2lzsuvexyh3e8adc4v6yo80s4azwi1twyzkvmgywe30zjkj8jlt70hq2nubp2xb1u3xpunwfulv7xrq7d0fthp45o80ttlhnio8h2catxmgvfab47hbgcurwgz3w0qbe8ou3l0af7076seztuu5dhdqju53qdtcmstjdt3zkrmf5jkhnu9cfgbj5c94ucuvg4axfx4kz5kypzi59t1pbwdw3kjdz7uqdriha1rhuxvcamojcuzyghprj1zu2pkxvg9gunkj61z8zrwxmbx2c1oplgm818888yzsq5fhch7hlqcv58kch427w27yik60cxgccvync59ibbi9pbach9uawg7nbyucfikslcbm5ozrkvum4g == 
\0\9\9\8\p\v\j\6\l\c\s\c\0\i\k\l\m\t\o\v\y\x\k\x\1\p\5\z\u\k\2\9\5\3\e\2\f\7\8\0\9\g\a\r\u\7\t\c\j\9\6\z\n\u\o\o\n\2\r\5\c\z\v\7\y\e\e\9\q\g\q\s\i\b\p\z\d\y\l\t\b\a\g\c\4\a\f\b\5\5\e\h\r\o\i\4\q\7\k\c\4\o\m\v\g\7\f\s\q\2\t\e\0\9\4\w\f\r\w\c\o\c\k\b\e\b\f\l\7\1\u\g\f\1\v\w\9\4\4\3\b\f\w\2\l\z\s\u\v\e\x\y\h\3\e\8\a\d\c\4\v\6\y\o\8\0\s\4\a\z\w\i\1\t\w\y\z\k\v\m\g\y\w\e\3\0\z\j\k\j\8\j\l\t\7\0\h\q\2\n\u\b\p\2\x\b\1\u\3\x\p\u\n\w\f\u\l\v\7\x\r\q\7\d\0\f\t\h\p\4\5\o\8\0\t\t\l\h\n\i\o\8\h\2\c\a\t\x\m\g\v\f\a\b\4\7\h\b\g\c\u\r\w\g\z\3\w\0\q\b\e\8\o\u\3\l\0\a\f\7\0\7\6\s\e\z\t\u\u\5\d\h\d\q\j\u\5\3\q\d\t\c\m\s\t\j\d\t\3\z\k\r\m\f\5\j\k\h\n\u\9\c\f\g\b\j\5\c\9\4\u\c\u\v\g\4\a\x\f\x\4\k\z\5\k\y\p\z\i\5\9\t\1\p\b\w\d\w\3\k\j\d\z\7\u\q\d\r\i\h\a\1\r\h\u\x\v\c\a\m\o\j\c\u\z\y\g\h\p\r\j\1\z\u\2\p\k\x\v\g\9\g\u\n\k\j\6\1\z\8\z\r\w\x\m\b\x\2\c\1\o\p\l\g\m\8\1\8\8\8\8\y\z\s\q\5\f\h\c\h\7\h\l\q\c\v\5\8\k\c\h\4\2\7\w\2\7\y\i\k\6\0\c\x\g\c\c\v\y\n\c\5\9\i\b\b\i\9\p\b\a\c\h\9\u\a\w\g\7\n\b\y\u\c\f\i\k\s\l\c\b\m\5\o\z\r\k\v\u\m\4\g ]] 00:07:38.748 16:56:28 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:38.748 16:56:28 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:07:38.748 16:56:28 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:38.748 16:56:28 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:38.748 16:56:28 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:38.748 16:56:28 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:38.748 [2024-07-15 16:56:28.868145] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:07:38.748 [2024-07-15 16:56:28.868251] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63394 ] 00:07:38.748 [2024-07-15 16:56:29.005470] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.007 [2024-07-15 16:56:29.105709] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.007 [2024-07-15 16:56:29.160973] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:39.266  Copying: 512/512 [B] (average 500 kBps) 00:07:39.266 00:07:39.266 16:56:29 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ f74qk56zmk82sdh2nflikwmytno0qw7wj31gd6e9laaxwbc1bvk14ov9ewez8qrn1d1u0eflqtth0ig4wpzd3fcb5mlalit0o1gwed77odt4chi82k5an3ywwfvre5gq44ub4pdb1tnfvzpbhvjxsshl4j8iu3l6finpicwqggk8u60ky0x2p47yw991xya8lyll1la9b7d1tgu2ws4dmbx9zvnasbryxuwnhjgjxs46c4vj7z3a9iba5cm7q5eem0z7sc6jjdvydnlo4kdefdjdutux15iieqlggnrhg6e22hijy49sqv6pp4k3hsjah2a3muyue1ilr82m0nxqtss65czgye6m4eubbyp5hgmrubfmeexpuz1huhtuxwgpi712816y7v0qu6m3c2ddon85yqt7jjtlyove3ev5tkiotfm9dx8j1pguh5f54qde6chvuta30br0a8nx1sfe7qb1ve54u4g5fwju5oue65opbmy9xu3uxxgsx2p5rann == \f\7\4\q\k\5\6\z\m\k\8\2\s\d\h\2\n\f\l\i\k\w\m\y\t\n\o\0\q\w\7\w\j\3\1\g\d\6\e\9\l\a\a\x\w\b\c\1\b\v\k\1\4\o\v\9\e\w\e\z\8\q\r\n\1\d\1\u\0\e\f\l\q\t\t\h\0\i\g\4\w\p\z\d\3\f\c\b\5\m\l\a\l\i\t\0\o\1\g\w\e\d\7\7\o\d\t\4\c\h\i\8\2\k\5\a\n\3\y\w\w\f\v\r\e\5\g\q\4\4\u\b\4\p\d\b\1\t\n\f\v\z\p\b\h\v\j\x\s\s\h\l\4\j\8\i\u\3\l\6\f\i\n\p\i\c\w\q\g\g\k\8\u\6\0\k\y\0\x\2\p\4\7\y\w\9\9\1\x\y\a\8\l\y\l\l\1\l\a\9\b\7\d\1\t\g\u\2\w\s\4\d\m\b\x\9\z\v\n\a\s\b\r\y\x\u\w\n\h\j\g\j\x\s\4\6\c\4\v\j\7\z\3\a\9\i\b\a\5\c\m\7\q\5\e\e\m\0\z\7\s\c\6\j\j\d\v\y\d\n\l\o\4\k\d\e\f\d\j\d\u\t\u\x\1\5\i\i\e\q\l\g\g\n\r\h\g\6\e\2\2\h\i\j\y\4\9\s\q\v\6\p\p\4\k\3\h\s\j\a\h\2\a\3\m\u\y\u\e\1\i\l\r\8\2\m\0\n\x\q\t\s\s\6\5\c\z\g\y\e\6\m\4\e\u\b\b\y\p\5\h\g\m\r\u\b\f\m\e\e\x\p\u\z\1\h\u\h\t\u\x\w\g\p\i\7\1\2\8\1\6\y\7\v\0\q\u\6\m\3\c\2\d\d\o\n\8\5\y\q\t\7\j\j\t\l\y\o\v\e\3\e\v\5\t\k\i\o\t\f\m\9\d\x\8\j\1\p\g\u\h\5\f\5\4\q\d\e\6\c\h\v\u\t\a\3\0\b\r\0\a\8\n\x\1\s\f\e\7\q\b\1\v\e\5\4\u\4\g\5\f\w\j\u\5\o\u\e\6\5\o\p\b\m\y\9\x\u\3\u\x\x\g\s\x\2\p\5\r\a\n\n ]] 00:07:39.266 16:56:29 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:39.267 16:56:29 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:39.267 [2024-07-15 16:56:29.488911] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:07:39.267 [2024-07-15 16:56:29.489003] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63407 ] 00:07:39.525 [2024-07-15 16:56:29.628701] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.525 [2024-07-15 16:56:29.735006] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.525 [2024-07-15 16:56:29.790028] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:39.784  Copying: 512/512 [B] (average 500 kBps) 00:07:39.784 00:07:39.784 16:56:30 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ f74qk56zmk82sdh2nflikwmytno0qw7wj31gd6e9laaxwbc1bvk14ov9ewez8qrn1d1u0eflqtth0ig4wpzd3fcb5mlalit0o1gwed77odt4chi82k5an3ywwfvre5gq44ub4pdb1tnfvzpbhvjxsshl4j8iu3l6finpicwqggk8u60ky0x2p47yw991xya8lyll1la9b7d1tgu2ws4dmbx9zvnasbryxuwnhjgjxs46c4vj7z3a9iba5cm7q5eem0z7sc6jjdvydnlo4kdefdjdutux15iieqlggnrhg6e22hijy49sqv6pp4k3hsjah2a3muyue1ilr82m0nxqtss65czgye6m4eubbyp5hgmrubfmeexpuz1huhtuxwgpi712816y7v0qu6m3c2ddon85yqt7jjtlyove3ev5tkiotfm9dx8j1pguh5f54qde6chvuta30br0a8nx1sfe7qb1ve54u4g5fwju5oue65opbmy9xu3uxxgsx2p5rann == \f\7\4\q\k\5\6\z\m\k\8\2\s\d\h\2\n\f\l\i\k\w\m\y\t\n\o\0\q\w\7\w\j\3\1\g\d\6\e\9\l\a\a\x\w\b\c\1\b\v\k\1\4\o\v\9\e\w\e\z\8\q\r\n\1\d\1\u\0\e\f\l\q\t\t\h\0\i\g\4\w\p\z\d\3\f\c\b\5\m\l\a\l\i\t\0\o\1\g\w\e\d\7\7\o\d\t\4\c\h\i\8\2\k\5\a\n\3\y\w\w\f\v\r\e\5\g\q\4\4\u\b\4\p\d\b\1\t\n\f\v\z\p\b\h\v\j\x\s\s\h\l\4\j\8\i\u\3\l\6\f\i\n\p\i\c\w\q\g\g\k\8\u\6\0\k\y\0\x\2\p\4\7\y\w\9\9\1\x\y\a\8\l\y\l\l\1\l\a\9\b\7\d\1\t\g\u\2\w\s\4\d\m\b\x\9\z\v\n\a\s\b\r\y\x\u\w\n\h\j\g\j\x\s\4\6\c\4\v\j\7\z\3\a\9\i\b\a\5\c\m\7\q\5\e\e\m\0\z\7\s\c\6\j\j\d\v\y\d\n\l\o\4\k\d\e\f\d\j\d\u\t\u\x\1\5\i\i\e\q\l\g\g\n\r\h\g\6\e\2\2\h\i\j\y\4\9\s\q\v\6\p\p\4\k\3\h\s\j\a\h\2\a\3\m\u\y\u\e\1\i\l\r\8\2\m\0\n\x\q\t\s\s\6\5\c\z\g\y\e\6\m\4\e\u\b\b\y\p\5\h\g\m\r\u\b\f\m\e\e\x\p\u\z\1\h\u\h\t\u\x\w\g\p\i\7\1\2\8\1\6\y\7\v\0\q\u\6\m\3\c\2\d\d\o\n\8\5\y\q\t\7\j\j\t\l\y\o\v\e\3\e\v\5\t\k\i\o\t\f\m\9\d\x\8\j\1\p\g\u\h\5\f\5\4\q\d\e\6\c\h\v\u\t\a\3\0\b\r\0\a\8\n\x\1\s\f\e\7\q\b\1\v\e\5\4\u\4\g\5\f\w\j\u\5\o\u\e\6\5\o\p\b\m\y\9\x\u\3\u\x\x\g\s\x\2\p\5\r\a\n\n ]] 00:07:39.784 16:56:30 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:39.784 16:56:30 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:40.047 [2024-07-15 16:56:30.101338] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:07:40.047 [2024-07-15 16:56:30.101438] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63409 ] 00:07:40.047 [2024-07-15 16:56:30.236413] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.307 [2024-07-15 16:56:30.364210] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.307 [2024-07-15 16:56:30.420927] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:40.566  Copying: 512/512 [B] (average 166 kBps) 00:07:40.566 00:07:40.566 16:56:30 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ f74qk56zmk82sdh2nflikwmytno0qw7wj31gd6e9laaxwbc1bvk14ov9ewez8qrn1d1u0eflqtth0ig4wpzd3fcb5mlalit0o1gwed77odt4chi82k5an3ywwfvre5gq44ub4pdb1tnfvzpbhvjxsshl4j8iu3l6finpicwqggk8u60ky0x2p47yw991xya8lyll1la9b7d1tgu2ws4dmbx9zvnasbryxuwnhjgjxs46c4vj7z3a9iba5cm7q5eem0z7sc6jjdvydnlo4kdefdjdutux15iieqlggnrhg6e22hijy49sqv6pp4k3hsjah2a3muyue1ilr82m0nxqtss65czgye6m4eubbyp5hgmrubfmeexpuz1huhtuxwgpi712816y7v0qu6m3c2ddon85yqt7jjtlyove3ev5tkiotfm9dx8j1pguh5f54qde6chvuta30br0a8nx1sfe7qb1ve54u4g5fwju5oue65opbmy9xu3uxxgsx2p5rann == \f\7\4\q\k\5\6\z\m\k\8\2\s\d\h\2\n\f\l\i\k\w\m\y\t\n\o\0\q\w\7\w\j\3\1\g\d\6\e\9\l\a\a\x\w\b\c\1\b\v\k\1\4\o\v\9\e\w\e\z\8\q\r\n\1\d\1\u\0\e\f\l\q\t\t\h\0\i\g\4\w\p\z\d\3\f\c\b\5\m\l\a\l\i\t\0\o\1\g\w\e\d\7\7\o\d\t\4\c\h\i\8\2\k\5\a\n\3\y\w\w\f\v\r\e\5\g\q\4\4\u\b\4\p\d\b\1\t\n\f\v\z\p\b\h\v\j\x\s\s\h\l\4\j\8\i\u\3\l\6\f\i\n\p\i\c\w\q\g\g\k\8\u\6\0\k\y\0\x\2\p\4\7\y\w\9\9\1\x\y\a\8\l\y\l\l\1\l\a\9\b\7\d\1\t\g\u\2\w\s\4\d\m\b\x\9\z\v\n\a\s\b\r\y\x\u\w\n\h\j\g\j\x\s\4\6\c\4\v\j\7\z\3\a\9\i\b\a\5\c\m\7\q\5\e\e\m\0\z\7\s\c\6\j\j\d\v\y\d\n\l\o\4\k\d\e\f\d\j\d\u\t\u\x\1\5\i\i\e\q\l\g\g\n\r\h\g\6\e\2\2\h\i\j\y\4\9\s\q\v\6\p\p\4\k\3\h\s\j\a\h\2\a\3\m\u\y\u\e\1\i\l\r\8\2\m\0\n\x\q\t\s\s\6\5\c\z\g\y\e\6\m\4\e\u\b\b\y\p\5\h\g\m\r\u\b\f\m\e\e\x\p\u\z\1\h\u\h\t\u\x\w\g\p\i\7\1\2\8\1\6\y\7\v\0\q\u\6\m\3\c\2\d\d\o\n\8\5\y\q\t\7\j\j\t\l\y\o\v\e\3\e\v\5\t\k\i\o\t\f\m\9\d\x\8\j\1\p\g\u\h\5\f\5\4\q\d\e\6\c\h\v\u\t\a\3\0\b\r\0\a\8\n\x\1\s\f\e\7\q\b\1\v\e\5\4\u\4\g\5\f\w\j\u\5\o\u\e\6\5\o\p\b\m\y\9\x\u\3\u\x\x\g\s\x\2\p\5\r\a\n\n ]] 00:07:40.566 16:56:30 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:40.566 16:56:30 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:40.566 [2024-07-15 16:56:30.760170] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:07:40.566 [2024-07-15 16:56:30.760263] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63422 ] 00:07:40.824 [2024-07-15 16:56:30.895208] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.824 [2024-07-15 16:56:31.001719] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.824 [2024-07-15 16:56:31.056688] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:41.083  Copying: 512/512 [B] (average 500 kBps) 00:07:41.083 00:07:41.083 16:56:31 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ f74qk56zmk82sdh2nflikwmytno0qw7wj31gd6e9laaxwbc1bvk14ov9ewez8qrn1d1u0eflqtth0ig4wpzd3fcb5mlalit0o1gwed77odt4chi82k5an3ywwfvre5gq44ub4pdb1tnfvzpbhvjxsshl4j8iu3l6finpicwqggk8u60ky0x2p47yw991xya8lyll1la9b7d1tgu2ws4dmbx9zvnasbryxuwnhjgjxs46c4vj7z3a9iba5cm7q5eem0z7sc6jjdvydnlo4kdefdjdutux15iieqlggnrhg6e22hijy49sqv6pp4k3hsjah2a3muyue1ilr82m0nxqtss65czgye6m4eubbyp5hgmrubfmeexpuz1huhtuxwgpi712816y7v0qu6m3c2ddon85yqt7jjtlyove3ev5tkiotfm9dx8j1pguh5f54qde6chvuta30br0a8nx1sfe7qb1ve54u4g5fwju5oue65opbmy9xu3uxxgsx2p5rann == \f\7\4\q\k\5\6\z\m\k\8\2\s\d\h\2\n\f\l\i\k\w\m\y\t\n\o\0\q\w\7\w\j\3\1\g\d\6\e\9\l\a\a\x\w\b\c\1\b\v\k\1\4\o\v\9\e\w\e\z\8\q\r\n\1\d\1\u\0\e\f\l\q\t\t\h\0\i\g\4\w\p\z\d\3\f\c\b\5\m\l\a\l\i\t\0\o\1\g\w\e\d\7\7\o\d\t\4\c\h\i\8\2\k\5\a\n\3\y\w\w\f\v\r\e\5\g\q\4\4\u\b\4\p\d\b\1\t\n\f\v\z\p\b\h\v\j\x\s\s\h\l\4\j\8\i\u\3\l\6\f\i\n\p\i\c\w\q\g\g\k\8\u\6\0\k\y\0\x\2\p\4\7\y\w\9\9\1\x\y\a\8\l\y\l\l\1\l\a\9\b\7\d\1\t\g\u\2\w\s\4\d\m\b\x\9\z\v\n\a\s\b\r\y\x\u\w\n\h\j\g\j\x\s\4\6\c\4\v\j\7\z\3\a\9\i\b\a\5\c\m\7\q\5\e\e\m\0\z\7\s\c\6\j\j\d\v\y\d\n\l\o\4\k\d\e\f\d\j\d\u\t\u\x\1\5\i\i\e\q\l\g\g\n\r\h\g\6\e\2\2\h\i\j\y\4\9\s\q\v\6\p\p\4\k\3\h\s\j\a\h\2\a\3\m\u\y\u\e\1\i\l\r\8\2\m\0\n\x\q\t\s\s\6\5\c\z\g\y\e\6\m\4\e\u\b\b\y\p\5\h\g\m\r\u\b\f\m\e\e\x\p\u\z\1\h\u\h\t\u\x\w\g\p\i\7\1\2\8\1\6\y\7\v\0\q\u\6\m\3\c\2\d\d\o\n\8\5\y\q\t\7\j\j\t\l\y\o\v\e\3\e\v\5\t\k\i\o\t\f\m\9\d\x\8\j\1\p\g\u\h\5\f\5\4\q\d\e\6\c\h\v\u\t\a\3\0\b\r\0\a\8\n\x\1\s\f\e\7\q\b\1\v\e\5\4\u\4\g\5\f\w\j\u\5\o\u\e\6\5\o\p\b\m\y\9\x\u\3\u\x\x\g\s\x\2\p\5\r\a\n\n ]] 00:07:41.083 00:07:41.083 real 0m4.986s 00:07:41.083 user 0m2.788s 00:07:41.083 sys 0m1.208s 00:07:41.083 ************************************ 00:07:41.083 END TEST dd_flags_misc_forced_aio 00:07:41.083 ************************************ 00:07:41.083 16:56:31 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:41.083 16:56:31 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:41.083 16:56:31 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:07:41.083 16:56:31 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:07:41.083 16:56:31 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:41.083 16:56:31 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:41.083 00:07:41.083 real 0m23.060s 00:07:41.083 user 0m11.998s 00:07:41.083 sys 0m6.952s 00:07:41.083 16:56:31 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:41.083 16:56:31 
spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:41.083 ************************************ 00:07:41.083 END TEST spdk_dd_posix 00:07:41.083 ************************************ 00:07:41.341 16:56:31 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:07:41.341 16:56:31 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:07:41.341 16:56:31 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:41.341 16:56:31 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:41.341 16:56:31 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:41.341 ************************************ 00:07:41.341 START TEST spdk_dd_malloc 00:07:41.341 ************************************ 00:07:41.341 16:56:31 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:07:41.341 * Looking for test storage... 00:07:41.341 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:41.341 16:56:31 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:41.341 16:56:31 spdk_dd.spdk_dd_malloc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:41.341 16:56:31 spdk_dd.spdk_dd_malloc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:41.342 16:56:31 spdk_dd.spdk_dd_malloc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:41.342 16:56:31 spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.342 16:56:31 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.342 16:56:31 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.342 16:56:31 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:07:41.342 16:56:31 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.342 16:56:31 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:07:41.342 16:56:31 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:41.342 16:56:31 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:41.342 16:56:31 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:07:41.342 ************************************ 00:07:41.342 START TEST dd_malloc_copy 00:07:41.342 ************************************ 00:07:41.342 16:56:31 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1123 -- # malloc_copy 00:07:41.342 16:56:31 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:07:41.342 16:56:31 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:07:41.342 16:56:31 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:07:41.342 16:56:31 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:07:41.342 16:56:31 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:07:41.342 16:56:31 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:07:41.342 16:56:31 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:07:41.342 16:56:31 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:07:41.342 16:56:31 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:41.342 16:56:31 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:07:41.342 [2024-07-15 16:56:31.566756] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:07:41.342 [2024-07-15 16:56:31.566858] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63496 ] 00:07:41.342 { 00:07:41.342 "subsystems": [ 00:07:41.342 { 00:07:41.342 "subsystem": "bdev", 00:07:41.342 "config": [ 00:07:41.342 { 00:07:41.342 "params": { 00:07:41.342 "block_size": 512, 00:07:41.342 "num_blocks": 1048576, 00:07:41.342 "name": "malloc0" 00:07:41.342 }, 00:07:41.342 "method": "bdev_malloc_create" 00:07:41.342 }, 00:07:41.342 { 00:07:41.342 "params": { 00:07:41.342 "block_size": 512, 00:07:41.342 "num_blocks": 1048576, 00:07:41.342 "name": "malloc1" 00:07:41.342 }, 00:07:41.342 "method": "bdev_malloc_create" 00:07:41.342 }, 00:07:41.342 { 00:07:41.342 "method": "bdev_wait_for_examine" 00:07:41.342 } 00:07:41.342 ] 00:07:41.342 } 00:07:41.342 ] 00:07:41.342 } 00:07:41.600 [2024-07-15 16:56:31.704625] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.600 [2024-07-15 16:56:31.819604] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.600 [2024-07-15 16:56:31.874031] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:45.260  Copying: 196/512 [MB] (196 MBps) Copying: 384/512 [MB] (187 MBps) Copying: 512/512 [MB] (average 193 MBps) 00:07:45.260 00:07:45.260 16:56:35 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:07:45.261 16:56:35 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:07:45.261 16:56:35 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:45.261 16:56:35 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:07:45.261 [2024-07-15 16:56:35.504010] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:07:45.261 [2024-07-15 16:56:35.504118] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63548 ] 00:07:45.261 { 00:07:45.261 "subsystems": [ 00:07:45.261 { 00:07:45.261 "subsystem": "bdev", 00:07:45.261 "config": [ 00:07:45.261 { 00:07:45.261 "params": { 00:07:45.261 "block_size": 512, 00:07:45.261 "num_blocks": 1048576, 00:07:45.261 "name": "malloc0" 00:07:45.261 }, 00:07:45.261 "method": "bdev_malloc_create" 00:07:45.261 }, 00:07:45.261 { 00:07:45.261 "params": { 00:07:45.261 "block_size": 512, 00:07:45.261 "num_blocks": 1048576, 00:07:45.261 "name": "malloc1" 00:07:45.261 }, 00:07:45.261 "method": "bdev_malloc_create" 00:07:45.261 }, 00:07:45.261 { 00:07:45.261 "method": "bdev_wait_for_examine" 00:07:45.261 } 00:07:45.261 ] 00:07:45.261 } 00:07:45.261 ] 00:07:45.261 } 00:07:45.517 [2024-07-15 16:56:35.643046] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.517 [2024-07-15 16:56:35.754807] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.518 [2024-07-15 16:56:35.807683] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:49.039  Copying: 197/512 [MB] (197 MBps) Copying: 395/512 [MB] (198 MBps) Copying: 512/512 [MB] (average 197 MBps) 00:07:49.039 00:07:49.039 00:07:49.039 real 0m7.796s 00:07:49.039 user 0m6.818s 00:07:49.039 sys 0m0.815s 00:07:49.039 16:56:39 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:49.039 ************************************ 00:07:49.039 16:56:39 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:07:49.039 END TEST dd_malloc_copy 00:07:49.039 ************************************ 00:07:49.298 16:56:39 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1142 -- # return 0 00:07:49.298 00:07:49.298 real 0m7.931s 00:07:49.298 user 0m6.868s 00:07:49.298 sys 0m0.902s 00:07:49.298 16:56:39 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:49.298 16:56:39 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:07:49.298 ************************************ 00:07:49.298 END TEST spdk_dd_malloc 00:07:49.298 ************************************ 00:07:49.298 16:56:39 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:07:49.298 16:56:39 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:07:49.298 16:56:39 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:49.298 16:56:39 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:49.298 16:56:39 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:49.298 ************************************ 00:07:49.298 START TEST spdk_dd_bdev_to_bdev 00:07:49.298 ************************************ 00:07:49.298 16:56:39 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:07:49.298 * Looking for test storage... 
00:07:49.298 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:49.298 16:56:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:49.298 16:56:39 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:49.298 16:56:39 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:49.298 16:56:39 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:49.298 16:56:39 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.298 16:56:39 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.298 16:56:39 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.298 16:56:39 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:07:49.298 16:56:39 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.298 16:56:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:07:49.298 16:56:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:07:49.298 16:56:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:07:49.298 16:56:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:07:49.298 16:56:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:07:49.298 
16:56:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:07:49.298 16:56:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:07:49.298 16:56:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:07:49.298 16:56:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:07:49.298 16:56:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1_pci=0000:00:11.0 00:07:49.298 16:56:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:07:49.298 16:56:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:07:49.298 16:56:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:07:49.298 16:56:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:07:49.298 16:56:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:49.298 16:56:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:49.298 16:56:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:07:49.298 16:56:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:07:49.298 16:56:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:07:49.298 16:56:39 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:49.298 16:56:39 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:49.298 16:56:39 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:49.298 ************************************ 00:07:49.298 START TEST dd_inflate_file 00:07:49.298 ************************************ 00:07:49.298 16:56:39 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:07:49.298 [2024-07-15 16:56:39.555481] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:07:49.298 [2024-07-15 16:56:39.555597] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63654 ] 00:07:49.557 [2024-07-15 16:56:39.690341] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.557 [2024-07-15 16:56:39.802767] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.816 [2024-07-15 16:56:39.860274] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:50.075  Copying: 64/64 [MB] (average 1684 MBps) 00:07:50.075 00:07:50.075 00:07:50.075 real 0m0.661s 00:07:50.075 user 0m0.408s 00:07:50.075 sys 0m0.303s 00:07:50.075 16:56:40 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:50.075 ************************************ 00:07:50.075 END TEST dd_inflate_file 00:07:50.075 16:56:40 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:07:50.075 ************************************ 00:07:50.075 16:56:40 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1142 -- # return 0 00:07:50.075 16:56:40 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:07:50.075 16:56:40 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:07:50.075 16:56:40 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:07:50.075 16:56:40 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:07:50.075 16:56:40 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:50.075 16:56:40 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:50.075 16:56:40 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:50.075 16:56:40 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:50.075 16:56:40 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:50.075 ************************************ 00:07:50.075 START TEST dd_copy_to_out_bdev 00:07:50.075 ************************************ 00:07:50.075 16:56:40 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:07:50.075 [2024-07-15 16:56:40.276677] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:07:50.075 [2024-07-15 16:56:40.276780] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63687 ] 00:07:50.075 { 00:07:50.075 "subsystems": [ 00:07:50.075 { 00:07:50.075 "subsystem": "bdev", 00:07:50.075 "config": [ 00:07:50.075 { 00:07:50.075 "params": { 00:07:50.075 "trtype": "pcie", 00:07:50.075 "traddr": "0000:00:10.0", 00:07:50.075 "name": "Nvme0" 00:07:50.075 }, 00:07:50.075 "method": "bdev_nvme_attach_controller" 00:07:50.075 }, 00:07:50.075 { 00:07:50.075 "params": { 00:07:50.075 "trtype": "pcie", 00:07:50.075 "traddr": "0000:00:11.0", 00:07:50.075 "name": "Nvme1" 00:07:50.075 }, 00:07:50.075 "method": "bdev_nvme_attach_controller" 00:07:50.075 }, 00:07:50.075 { 00:07:50.075 "method": "bdev_wait_for_examine" 00:07:50.075 } 00:07:50.075 ] 00:07:50.075 } 00:07:50.075 ] 00:07:50.075 } 00:07:50.334 [2024-07-15 16:56:40.416905] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.334 [2024-07-15 16:56:40.535161] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.334 [2024-07-15 16:56:40.592462] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:52.005  Copying: 57/64 [MB] (57 MBps) Copying: 64/64 [MB] (average 57 MBps) 00:07:52.005 00:07:52.005 00:07:52.005 real 0m1.909s 00:07:52.005 user 0m1.662s 00:07:52.005 sys 0m1.482s 00:07:52.005 16:56:42 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:52.005 16:56:42 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:52.005 ************************************ 00:07:52.005 END TEST dd_copy_to_out_bdev 00:07:52.005 ************************************ 00:07:52.006 16:56:42 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1142 -- # return 0 00:07:52.006 16:56:42 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:07:52.006 16:56:42 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:07:52.006 16:56:42 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:52.006 16:56:42 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:52.006 16:56:42 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:52.006 ************************************ 00:07:52.006 START TEST dd_offset_magic 00:07:52.006 ************************************ 00:07:52.006 16:56:42 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1123 -- # offset_magic 00:07:52.006 16:56:42 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:07:52.006 16:56:42 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:07:52.006 16:56:42 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:07:52.006 16:56:42 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:07:52.006 16:56:42 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:07:52.006 16:56:42 
spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:07:52.006 16:56:42 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:52.006 16:56:42 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:52.006 [2024-07-15 16:56:42.230529] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:07:52.006 [2024-07-15 16:56:42.230617] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63732 ] 00:07:52.006 { 00:07:52.006 "subsystems": [ 00:07:52.006 { 00:07:52.006 "subsystem": "bdev", 00:07:52.006 "config": [ 00:07:52.006 { 00:07:52.006 "params": { 00:07:52.006 "trtype": "pcie", 00:07:52.006 "traddr": "0000:00:10.0", 00:07:52.006 "name": "Nvme0" 00:07:52.006 }, 00:07:52.006 "method": "bdev_nvme_attach_controller" 00:07:52.006 }, 00:07:52.006 { 00:07:52.006 "params": { 00:07:52.006 "trtype": "pcie", 00:07:52.006 "traddr": "0000:00:11.0", 00:07:52.006 "name": "Nvme1" 00:07:52.006 }, 00:07:52.006 "method": "bdev_nvme_attach_controller" 00:07:52.006 }, 00:07:52.006 { 00:07:52.006 "method": "bdev_wait_for_examine" 00:07:52.006 } 00:07:52.006 ] 00:07:52.006 } 00:07:52.006 ] 00:07:52.006 } 00:07:52.264 [2024-07-15 16:56:42.366189] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.264 [2024-07-15 16:56:42.478483] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.264 [2024-07-15 16:56:42.534621] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:52.779  Copying: 65/65 [MB] (average 942 MBps) 00:07:52.779 00:07:52.779 16:56:43 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:07:52.779 16:56:43 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:07:52.779 16:56:43 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:52.779 16:56:43 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:53.038 [2024-07-15 16:56:43.079995] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:07:53.038 [2024-07-15 16:56:43.080079] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63752 ] 00:07:53.038 { 00:07:53.038 "subsystems": [ 00:07:53.038 { 00:07:53.038 "subsystem": "bdev", 00:07:53.038 "config": [ 00:07:53.038 { 00:07:53.038 "params": { 00:07:53.038 "trtype": "pcie", 00:07:53.038 "traddr": "0000:00:10.0", 00:07:53.038 "name": "Nvme0" 00:07:53.038 }, 00:07:53.038 "method": "bdev_nvme_attach_controller" 00:07:53.038 }, 00:07:53.038 { 00:07:53.038 "params": { 00:07:53.038 "trtype": "pcie", 00:07:53.038 "traddr": "0000:00:11.0", 00:07:53.038 "name": "Nvme1" 00:07:53.038 }, 00:07:53.038 "method": "bdev_nvme_attach_controller" 00:07:53.038 }, 00:07:53.038 { 00:07:53.038 "method": "bdev_wait_for_examine" 00:07:53.038 } 00:07:53.038 ] 00:07:53.038 } 00:07:53.038 ] 00:07:53.038 } 00:07:53.038 [2024-07-15 16:56:43.214938] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.038 [2024-07-15 16:56:43.327949] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.297 [2024-07-15 16:56:43.381727] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:53.555  Copying: 1024/1024 [kB] (average 500 MBps) 00:07:53.555 00:07:53.555 16:56:43 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:07:53.555 16:56:43 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:07:53.555 16:56:43 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:07:53.555 16:56:43 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:07:53.555 16:56:43 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:07:53.555 16:56:43 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:53.555 16:56:43 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:53.555 [2024-07-15 16:56:43.824171] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:07:53.555 [2024-07-15 16:56:43.824277] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63772 ] 00:07:53.555 { 00:07:53.555 "subsystems": [ 00:07:53.555 { 00:07:53.555 "subsystem": "bdev", 00:07:53.555 "config": [ 00:07:53.555 { 00:07:53.555 "params": { 00:07:53.555 "trtype": "pcie", 00:07:53.555 "traddr": "0000:00:10.0", 00:07:53.555 "name": "Nvme0" 00:07:53.555 }, 00:07:53.555 "method": "bdev_nvme_attach_controller" 00:07:53.555 }, 00:07:53.555 { 00:07:53.555 "params": { 00:07:53.555 "trtype": "pcie", 00:07:53.555 "traddr": "0000:00:11.0", 00:07:53.555 "name": "Nvme1" 00:07:53.555 }, 00:07:53.555 "method": "bdev_nvme_attach_controller" 00:07:53.555 }, 00:07:53.555 { 00:07:53.555 "method": "bdev_wait_for_examine" 00:07:53.555 } 00:07:53.555 ] 00:07:53.555 } 00:07:53.555 ] 00:07:53.555 } 00:07:53.813 [2024-07-15 16:56:43.962193] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.813 [2024-07-15 16:56:44.068685] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.071 [2024-07-15 16:56:44.122263] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:54.329  Copying: 65/65 [MB] (average 1160 MBps) 00:07:54.329 00:07:54.329 16:56:44 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:07:54.329 16:56:44 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:07:54.329 16:56:44 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:54.329 16:56:44 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:54.586 [2024-07-15 16:56:44.656182] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:07:54.586 [2024-07-15 16:56:44.656284] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63792 ] 00:07:54.586 { 00:07:54.586 "subsystems": [ 00:07:54.586 { 00:07:54.586 "subsystem": "bdev", 00:07:54.586 "config": [ 00:07:54.586 { 00:07:54.586 "params": { 00:07:54.586 "trtype": "pcie", 00:07:54.586 "traddr": "0000:00:10.0", 00:07:54.586 "name": "Nvme0" 00:07:54.586 }, 00:07:54.586 "method": "bdev_nvme_attach_controller" 00:07:54.586 }, 00:07:54.586 { 00:07:54.586 "params": { 00:07:54.586 "trtype": "pcie", 00:07:54.586 "traddr": "0000:00:11.0", 00:07:54.586 "name": "Nvme1" 00:07:54.586 }, 00:07:54.586 "method": "bdev_nvme_attach_controller" 00:07:54.586 }, 00:07:54.586 { 00:07:54.586 "method": "bdev_wait_for_examine" 00:07:54.586 } 00:07:54.586 ] 00:07:54.586 } 00:07:54.586 ] 00:07:54.586 } 00:07:54.586 [2024-07-15 16:56:44.793525] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.844 [2024-07-15 16:56:44.898607] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.844 [2024-07-15 16:56:44.953591] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:55.147  Copying: 1024/1024 [kB] (average 500 MBps) 00:07:55.147 00:07:55.147 16:56:45 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:07:55.147 16:56:45 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:07:55.147 00:07:55.147 real 0m3.165s 00:07:55.147 user 0m2.347s 00:07:55.147 sys 0m0.879s 00:07:55.147 16:56:45 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:55.147 16:56:45 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:55.147 ************************************ 00:07:55.147 END TEST dd_offset_magic 00:07:55.147 ************************************ 00:07:55.147 16:56:45 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1142 -- # return 0 00:07:55.147 16:56:45 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:07:55.147 16:56:45 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:07:55.147 16:56:45 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:55.147 16:56:45 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:07:55.147 16:56:45 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:07:55.147 16:56:45 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:07:55.147 16:56:45 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:07:55.147 16:56:45 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:07:55.147 16:56:45 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:07:55.147 16:56:45 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:55.147 16:56:45 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:55.441 [2024-07-15 16:56:45.443067] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:07:55.441 [2024-07-15 16:56:45.443169] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63823 ] 00:07:55.441 { 00:07:55.441 "subsystems": [ 00:07:55.441 { 00:07:55.441 "subsystem": "bdev", 00:07:55.441 "config": [ 00:07:55.441 { 00:07:55.441 "params": { 00:07:55.441 "trtype": "pcie", 00:07:55.441 "traddr": "0000:00:10.0", 00:07:55.441 "name": "Nvme0" 00:07:55.441 }, 00:07:55.441 "method": "bdev_nvme_attach_controller" 00:07:55.441 }, 00:07:55.441 { 00:07:55.441 "params": { 00:07:55.441 "trtype": "pcie", 00:07:55.441 "traddr": "0000:00:11.0", 00:07:55.441 "name": "Nvme1" 00:07:55.441 }, 00:07:55.441 "method": "bdev_nvme_attach_controller" 00:07:55.441 }, 00:07:55.441 { 00:07:55.441 "method": "bdev_wait_for_examine" 00:07:55.441 } 00:07:55.441 ] 00:07:55.441 } 00:07:55.441 ] 00:07:55.441 } 00:07:55.441 [2024-07-15 16:56:45.583895] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.441 [2024-07-15 16:56:45.732075] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.699 [2024-07-15 16:56:45.790871] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:55.959  Copying: 5120/5120 [kB] (average 1250 MBps) 00:07:55.959 00:07:55.959 16:56:46 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:07:55.959 16:56:46 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:07:55.959 16:56:46 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:07:55.959 16:56:46 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:07:55.959 16:56:46 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:07:55.959 16:56:46 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:07:55.959 16:56:46 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:07:55.959 16:56:46 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:07:55.959 16:56:46 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:55.959 16:56:46 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:55.959 { 00:07:55.959 "subsystems": [ 00:07:55.959 { 00:07:55.959 "subsystem": "bdev", 00:07:55.959 "config": [ 00:07:55.959 { 00:07:55.959 "params": { 00:07:55.959 "trtype": "pcie", 00:07:55.959 "traddr": "0000:00:10.0", 00:07:55.959 "name": "Nvme0" 00:07:55.959 }, 00:07:55.959 "method": "bdev_nvme_attach_controller" 00:07:55.959 }, 00:07:55.959 { 00:07:55.959 "params": { 00:07:55.959 "trtype": "pcie", 00:07:55.959 "traddr": "0000:00:11.0", 00:07:55.959 "name": "Nvme1" 00:07:55.959 }, 00:07:55.959 "method": "bdev_nvme_attach_controller" 00:07:55.959 }, 00:07:55.959 { 00:07:55.959 "method": "bdev_wait_for_examine" 00:07:55.959 } 00:07:55.959 ] 00:07:55.959 } 00:07:55.959 ] 00:07:55.959 } 00:07:55.959 [2024-07-15 16:56:46.253855] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:07:55.959 [2024-07-15 16:56:46.253952] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63844 ] 00:07:56.218 [2024-07-15 16:56:46.393212] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.218 [2024-07-15 16:56:46.502893] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.478 [2024-07-15 16:56:46.557849] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:56.737  Copying: 5120/5120 [kB] (average 833 MBps) 00:07:56.737 00:07:56.737 16:56:46 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:07:56.737 00:07:56.737 real 0m7.580s 00:07:56.737 user 0m5.651s 00:07:56.737 sys 0m3.361s 00:07:56.737 ************************************ 00:07:56.737 END TEST spdk_dd_bdev_to_bdev 00:07:56.737 ************************************ 00:07:56.737 16:56:46 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:56.737 16:56:46 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:56.737 16:56:47 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:07:56.737 16:56:47 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:07:56.737 16:56:47 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:07:56.737 16:56:47 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:56.737 16:56:47 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:56.737 16:56:47 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:56.737 ************************************ 00:07:56.737 START TEST spdk_dd_uring 00:07:56.737 ************************************ 00:07:56.737 16:56:47 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:07:56.996 * Looking for test storage... 
00:07:56.996 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:56.996 16:56:47 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:56.996 16:56:47 spdk_dd.spdk_dd_uring -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:56.996 16:56:47 spdk_dd.spdk_dd_uring -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:56.996 16:56:47 spdk_dd.spdk_dd_uring -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:56.996 16:56:47 spdk_dd.spdk_dd_uring -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.996 16:56:47 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.997 16:56:47 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.997 16:56:47 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:07:56.997 16:56:47 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.997 16:56:47 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:07:56.997 16:56:47 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:56.997 16:56:47 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:56.997 16:56:47 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:07:56.997 ************************************ 00:07:56.997 START TEST dd_uring_copy 00:07:56.997 ************************************ 00:07:56.997 
16:56:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1123 -- # uring_zram_copy 00:07:56.997 16:56:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:07:56.997 16:56:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:07:56.997 16:56:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:07:56.997 16:56:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:56.997 16:56:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:07:56.997 16:56:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:07:56.997 16:56:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@163 -- # [[ -e /sys/class/zram-control ]] 00:07:56.997 16:56:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # return 00:07:56.997 16:56:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:07:56.997 16:56:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # cat /sys/class/zram-control/hot_add 00:07:56.997 16:56:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:07:56.997 16:56:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:07:56.997 16:56:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@181 -- # local id=1 00:07:56.997 16:56:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # local size=512M 00:07:56.997 16:56:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@184 -- # [[ -e /sys/block/zram1 ]] 00:07:56.997 16:56:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@186 -- # echo 512M 00:07:56.997 16:56:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:07:56.997 16:56:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:07:56.997 16:56:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:07:56.997 16:56:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:07:56.997 16:56:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:07:56.997 16:56:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:07:56.997 16:56:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:07:56.997 16:56:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:07:56.997 16:56:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:56.997 16:56:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # 
magic=fceirij43ae5ezutkw1g543pzfakv90mcklxfsawrvk1etlfblxcx5llc9v0eguw6i24rzuwwy6sbl7x50j0zdzza4j11dxwtnrdjauwr71s3t5b94xgrwbyta5du41ddr81e0lo1ltmxhk544hmabwd33s3m9oxb2uq12l1p4br5cpalppotky6lbiu3urw8b2dxdjx30cgt6tkyzqgeta2ddnaqb66vdi10531wcr0cnznz14yfsj3pc1md5iaxh8wilpdrj1ql2oymcdlvf9fv9i2p96xp4w7x96da77pbrqgiod8raqxt48xqd2qw4piq1cwajkkkn16xh3jgy1x6esxbwg3jd09g4it6u1vk8y3b63vlyeft57o5tburmh6j5u2w07gzhij288sdbrnkccce0domlu1jilhv3fs743b6fakjrwsllgls5xifxsnhh6hkkzlbp54htzzr1yzym0k4bvr3hdqlfi93l41oz71deqspkj8h6gesla38nljd8aqfnh4ykvjsrptmjuhcx6xahlq0jywp4h1q6h22hx26skdy4gsmkvi5fkvamrvnwe55ctpi1ukasgwwyutoxpoxdohkmh77k6eck0e73ik5yii3c0hwsvdhjwx4fir10yh4e0nsbve26i5bogtju5dwy26ciwkpzy7y5lpt0t983azt94u3v5s1fj5u5dedekufza3pn341qvxzigse91aa4lgglhyo7hlpmw8yjtveij20bncani62ivw26uxcexalhhqvqotv0fip6wp62jf4akrr73s94u4raj5szp2p39n0uugahk7bdc4vrysvoecn32uzyp9u2wohzwcs5yh4y13qwmz6bsx2j9myqb820l5qmmfiofdhbxl5tx1on0s8w4v77cwtylh8wie0ot9b045kr6dj6nmzhcm8ze50xghyhtbikji6qn2hyanw8i0pb0hcnljv3cjlnhinnuoacmot0njj7m5d5f5eumghszfy3efg9yegb59 00:07:56.997 16:56:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo fceirij43ae5ezutkw1g543pzfakv90mcklxfsawrvk1etlfblxcx5llc9v0eguw6i24rzuwwy6sbl7x50j0zdzza4j11dxwtnrdjauwr71s3t5b94xgrwbyta5du41ddr81e0lo1ltmxhk544hmabwd33s3m9oxb2uq12l1p4br5cpalppotky6lbiu3urw8b2dxdjx30cgt6tkyzqgeta2ddnaqb66vdi10531wcr0cnznz14yfsj3pc1md5iaxh8wilpdrj1ql2oymcdlvf9fv9i2p96xp4w7x96da77pbrqgiod8raqxt48xqd2qw4piq1cwajkkkn16xh3jgy1x6esxbwg3jd09g4it6u1vk8y3b63vlyeft57o5tburmh6j5u2w07gzhij288sdbrnkccce0domlu1jilhv3fs743b6fakjrwsllgls5xifxsnhh6hkkzlbp54htzzr1yzym0k4bvr3hdqlfi93l41oz71deqspkj8h6gesla38nljd8aqfnh4ykvjsrptmjuhcx6xahlq0jywp4h1q6h22hx26skdy4gsmkvi5fkvamrvnwe55ctpi1ukasgwwyutoxpoxdohkmh77k6eck0e73ik5yii3c0hwsvdhjwx4fir10yh4e0nsbve26i5bogtju5dwy26ciwkpzy7y5lpt0t983azt94u3v5s1fj5u5dedekufza3pn341qvxzigse91aa4lgglhyo7hlpmw8yjtveij20bncani62ivw26uxcexalhhqvqotv0fip6wp62jf4akrr73s94u4raj5szp2p39n0uugahk7bdc4vrysvoecn32uzyp9u2wohzwcs5yh4y13qwmz6bsx2j9myqb820l5qmmfiofdhbxl5tx1on0s8w4v77cwtylh8wie0ot9b045kr6dj6nmzhcm8ze50xghyhtbikji6qn2hyanw8i0pb0hcnljv3cjlnhinnuoacmot0njj7m5d5f5eumghszfy3efg9yegb59 00:07:56.997 16:56:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:07:56.997 [2024-07-15 16:56:47.199746] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:07:56.997 [2024-07-15 16:56:47.199851] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63914 ] 00:07:57.257 [2024-07-15 16:56:47.338259] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.257 [2024-07-15 16:56:47.455757] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.257 [2024-07-15 16:56:47.509367] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:58.453  Copying: 511/511 [MB] (average 1296 MBps) 00:07:58.453 00:07:58.453 16:56:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:07:58.453 16:56:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:07:58.453 16:56:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:58.453 16:56:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:58.453 [2024-07-15 16:56:48.578999] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:07:58.453 [2024-07-15 16:56:48.579069] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63930 ] 00:07:58.453 { 00:07:58.453 "subsystems": [ 00:07:58.453 { 00:07:58.453 "subsystem": "bdev", 00:07:58.453 "config": [ 00:07:58.453 { 00:07:58.453 "params": { 00:07:58.453 "block_size": 512, 00:07:58.453 "num_blocks": 1048576, 00:07:58.453 "name": "malloc0" 00:07:58.453 }, 00:07:58.453 "method": "bdev_malloc_create" 00:07:58.453 }, 00:07:58.453 { 00:07:58.453 "params": { 00:07:58.453 "filename": "/dev/zram1", 00:07:58.453 "name": "uring0" 00:07:58.453 }, 00:07:58.453 "method": "bdev_uring_create" 00:07:58.453 }, 00:07:58.453 { 00:07:58.453 "method": "bdev_wait_for_examine" 00:07:58.453 } 00:07:58.453 ] 00:07:58.453 } 00:07:58.453 ] 00:07:58.453 } 00:07:58.453 [2024-07-15 16:56:48.712126] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.712 [2024-07-15 16:56:48.820997] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.712 [2024-07-15 16:56:48.876028] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:01.542  Copying: 222/512 [MB] (222 MBps) Copying: 446/512 [MB] (223 MBps) Copying: 512/512 [MB] (average 222 MBps) 00:08:01.542 00:08:01.542 16:56:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:08:01.542 16:56:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:08:01.542 16:56:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:01.542 16:56:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:01.542 [2024-07-15 16:56:51.825738] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:08:01.543 [2024-07-15 16:56:51.825835] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63980 ] 00:08:01.801 { 00:08:01.801 "subsystems": [ 00:08:01.801 { 00:08:01.801 "subsystem": "bdev", 00:08:01.801 "config": [ 00:08:01.801 { 00:08:01.801 "params": { 00:08:01.801 "block_size": 512, 00:08:01.801 "num_blocks": 1048576, 00:08:01.801 "name": "malloc0" 00:08:01.801 }, 00:08:01.801 "method": "bdev_malloc_create" 00:08:01.801 }, 00:08:01.801 { 00:08:01.801 "params": { 00:08:01.801 "filename": "/dev/zram1", 00:08:01.801 "name": "uring0" 00:08:01.801 }, 00:08:01.801 "method": "bdev_uring_create" 00:08:01.801 }, 00:08:01.801 { 00:08:01.801 "method": "bdev_wait_for_examine" 00:08:01.801 } 00:08:01.801 ] 00:08:01.801 } 00:08:01.801 ] 00:08:01.801 } 00:08:01.801 [2024-07-15 16:56:51.956717] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.802 [2024-07-15 16:56:52.060432] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.060 [2024-07-15 16:56:52.116141] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:05.566  Copying: 190/512 [MB] (190 MBps) Copying: 364/512 [MB] (174 MBps) Copying: 512/512 [MB] (average 173 MBps) 00:08:05.566 00:08:05.566 16:56:55 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:08:05.566 16:56:55 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ fceirij43ae5ezutkw1g543pzfakv90mcklxfsawrvk1etlfblxcx5llc9v0eguw6i24rzuwwy6sbl7x50j0zdzza4j11dxwtnrdjauwr71s3t5b94xgrwbyta5du41ddr81e0lo1ltmxhk544hmabwd33s3m9oxb2uq12l1p4br5cpalppotky6lbiu3urw8b2dxdjx30cgt6tkyzqgeta2ddnaqb66vdi10531wcr0cnznz14yfsj3pc1md5iaxh8wilpdrj1ql2oymcdlvf9fv9i2p96xp4w7x96da77pbrqgiod8raqxt48xqd2qw4piq1cwajkkkn16xh3jgy1x6esxbwg3jd09g4it6u1vk8y3b63vlyeft57o5tburmh6j5u2w07gzhij288sdbrnkccce0domlu1jilhv3fs743b6fakjrwsllgls5xifxsnhh6hkkzlbp54htzzr1yzym0k4bvr3hdqlfi93l41oz71deqspkj8h6gesla38nljd8aqfnh4ykvjsrptmjuhcx6xahlq0jywp4h1q6h22hx26skdy4gsmkvi5fkvamrvnwe55ctpi1ukasgwwyutoxpoxdohkmh77k6eck0e73ik5yii3c0hwsvdhjwx4fir10yh4e0nsbve26i5bogtju5dwy26ciwkpzy7y5lpt0t983azt94u3v5s1fj5u5dedekufza3pn341qvxzigse91aa4lgglhyo7hlpmw8yjtveij20bncani62ivw26uxcexalhhqvqotv0fip6wp62jf4akrr73s94u4raj5szp2p39n0uugahk7bdc4vrysvoecn32uzyp9u2wohzwcs5yh4y13qwmz6bsx2j9myqb820l5qmmfiofdhbxl5tx1on0s8w4v77cwtylh8wie0ot9b045kr6dj6nmzhcm8ze50xghyhtbikji6qn2hyanw8i0pb0hcnljv3cjlnhinnuoacmot0njj7m5d5f5eumghszfy3efg9yegb59 == 
\f\c\e\i\r\i\j\4\3\a\e\5\e\z\u\t\k\w\1\g\5\4\3\p\z\f\a\k\v\9\0\m\c\k\l\x\f\s\a\w\r\v\k\1\e\t\l\f\b\l\x\c\x\5\l\l\c\9\v\0\e\g\u\w\6\i\2\4\r\z\u\w\w\y\6\s\b\l\7\x\5\0\j\0\z\d\z\z\a\4\j\1\1\d\x\w\t\n\r\d\j\a\u\w\r\7\1\s\3\t\5\b\9\4\x\g\r\w\b\y\t\a\5\d\u\4\1\d\d\r\8\1\e\0\l\o\1\l\t\m\x\h\k\5\4\4\h\m\a\b\w\d\3\3\s\3\m\9\o\x\b\2\u\q\1\2\l\1\p\4\b\r\5\c\p\a\l\p\p\o\t\k\y\6\l\b\i\u\3\u\r\w\8\b\2\d\x\d\j\x\3\0\c\g\t\6\t\k\y\z\q\g\e\t\a\2\d\d\n\a\q\b\6\6\v\d\i\1\0\5\3\1\w\c\r\0\c\n\z\n\z\1\4\y\f\s\j\3\p\c\1\m\d\5\i\a\x\h\8\w\i\l\p\d\r\j\1\q\l\2\o\y\m\c\d\l\v\f\9\f\v\9\i\2\p\9\6\x\p\4\w\7\x\9\6\d\a\7\7\p\b\r\q\g\i\o\d\8\r\a\q\x\t\4\8\x\q\d\2\q\w\4\p\i\q\1\c\w\a\j\k\k\k\n\1\6\x\h\3\j\g\y\1\x\6\e\s\x\b\w\g\3\j\d\0\9\g\4\i\t\6\u\1\v\k\8\y\3\b\6\3\v\l\y\e\f\t\5\7\o\5\t\b\u\r\m\h\6\j\5\u\2\w\0\7\g\z\h\i\j\2\8\8\s\d\b\r\n\k\c\c\c\e\0\d\o\m\l\u\1\j\i\l\h\v\3\f\s\7\4\3\b\6\f\a\k\j\r\w\s\l\l\g\l\s\5\x\i\f\x\s\n\h\h\6\h\k\k\z\l\b\p\5\4\h\t\z\z\r\1\y\z\y\m\0\k\4\b\v\r\3\h\d\q\l\f\i\9\3\l\4\1\o\z\7\1\d\e\q\s\p\k\j\8\h\6\g\e\s\l\a\3\8\n\l\j\d\8\a\q\f\n\h\4\y\k\v\j\s\r\p\t\m\j\u\h\c\x\6\x\a\h\l\q\0\j\y\w\p\4\h\1\q\6\h\2\2\h\x\2\6\s\k\d\y\4\g\s\m\k\v\i\5\f\k\v\a\m\r\v\n\w\e\5\5\c\t\p\i\1\u\k\a\s\g\w\w\y\u\t\o\x\p\o\x\d\o\h\k\m\h\7\7\k\6\e\c\k\0\e\7\3\i\k\5\y\i\i\3\c\0\h\w\s\v\d\h\j\w\x\4\f\i\r\1\0\y\h\4\e\0\n\s\b\v\e\2\6\i\5\b\o\g\t\j\u\5\d\w\y\2\6\c\i\w\k\p\z\y\7\y\5\l\p\t\0\t\9\8\3\a\z\t\9\4\u\3\v\5\s\1\f\j\5\u\5\d\e\d\e\k\u\f\z\a\3\p\n\3\4\1\q\v\x\z\i\g\s\e\9\1\a\a\4\l\g\g\l\h\y\o\7\h\l\p\m\w\8\y\j\t\v\e\i\j\2\0\b\n\c\a\n\i\6\2\i\v\w\2\6\u\x\c\e\x\a\l\h\h\q\v\q\o\t\v\0\f\i\p\6\w\p\6\2\j\f\4\a\k\r\r\7\3\s\9\4\u\4\r\a\j\5\s\z\p\2\p\3\9\n\0\u\u\g\a\h\k\7\b\d\c\4\v\r\y\s\v\o\e\c\n\3\2\u\z\y\p\9\u\2\w\o\h\z\w\c\s\5\y\h\4\y\1\3\q\w\m\z\6\b\s\x\2\j\9\m\y\q\b\8\2\0\l\5\q\m\m\f\i\o\f\d\h\b\x\l\5\t\x\1\o\n\0\s\8\w\4\v\7\7\c\w\t\y\l\h\8\w\i\e\0\o\t\9\b\0\4\5\k\r\6\d\j\6\n\m\z\h\c\m\8\z\e\5\0\x\g\h\y\h\t\b\i\k\j\i\6\q\n\2\h\y\a\n\w\8\i\0\p\b\0\h\c\n\l\j\v\3\c\j\l\n\h\i\n\n\u\o\a\c\m\o\t\0\n\j\j\7\m\5\d\5\f\5\e\u\m\g\h\s\z\f\y\3\e\f\g\9\y\e\g\b\5\9 ]] 00:08:05.566 16:56:55 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:08:05.566 16:56:55 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ fceirij43ae5ezutkw1g543pzfakv90mcklxfsawrvk1etlfblxcx5llc9v0eguw6i24rzuwwy6sbl7x50j0zdzza4j11dxwtnrdjauwr71s3t5b94xgrwbyta5du41ddr81e0lo1ltmxhk544hmabwd33s3m9oxb2uq12l1p4br5cpalppotky6lbiu3urw8b2dxdjx30cgt6tkyzqgeta2ddnaqb66vdi10531wcr0cnznz14yfsj3pc1md5iaxh8wilpdrj1ql2oymcdlvf9fv9i2p96xp4w7x96da77pbrqgiod8raqxt48xqd2qw4piq1cwajkkkn16xh3jgy1x6esxbwg3jd09g4it6u1vk8y3b63vlyeft57o5tburmh6j5u2w07gzhij288sdbrnkccce0domlu1jilhv3fs743b6fakjrwsllgls5xifxsnhh6hkkzlbp54htzzr1yzym0k4bvr3hdqlfi93l41oz71deqspkj8h6gesla38nljd8aqfnh4ykvjsrptmjuhcx6xahlq0jywp4h1q6h22hx26skdy4gsmkvi5fkvamrvnwe55ctpi1ukasgwwyutoxpoxdohkmh77k6eck0e73ik5yii3c0hwsvdhjwx4fir10yh4e0nsbve26i5bogtju5dwy26ciwkpzy7y5lpt0t983azt94u3v5s1fj5u5dedekufza3pn341qvxzigse91aa4lgglhyo7hlpmw8yjtveij20bncani62ivw26uxcexalhhqvqotv0fip6wp62jf4akrr73s94u4raj5szp2p39n0uugahk7bdc4vrysvoecn32uzyp9u2wohzwcs5yh4y13qwmz6bsx2j9myqb820l5qmmfiofdhbxl5tx1on0s8w4v77cwtylh8wie0ot9b045kr6dj6nmzhcm8ze50xghyhtbikji6qn2hyanw8i0pb0hcnljv3cjlnhinnuoacmot0njj7m5d5f5eumghszfy3efg9yegb59 == 
\f\c\e\i\r\i\j\4\3\a\e\5\e\z\u\t\k\w\1\g\5\4\3\p\z\f\a\k\v\9\0\m\c\k\l\x\f\s\a\w\r\v\k\1\e\t\l\f\b\l\x\c\x\5\l\l\c\9\v\0\e\g\u\w\6\i\2\4\r\z\u\w\w\y\6\s\b\l\7\x\5\0\j\0\z\d\z\z\a\4\j\1\1\d\x\w\t\n\r\d\j\a\u\w\r\7\1\s\3\t\5\b\9\4\x\g\r\w\b\y\t\a\5\d\u\4\1\d\d\r\8\1\e\0\l\o\1\l\t\m\x\h\k\5\4\4\h\m\a\b\w\d\3\3\s\3\m\9\o\x\b\2\u\q\1\2\l\1\p\4\b\r\5\c\p\a\l\p\p\o\t\k\y\6\l\b\i\u\3\u\r\w\8\b\2\d\x\d\j\x\3\0\c\g\t\6\t\k\y\z\q\g\e\t\a\2\d\d\n\a\q\b\6\6\v\d\i\1\0\5\3\1\w\c\r\0\c\n\z\n\z\1\4\y\f\s\j\3\p\c\1\m\d\5\i\a\x\h\8\w\i\l\p\d\r\j\1\q\l\2\o\y\m\c\d\l\v\f\9\f\v\9\i\2\p\9\6\x\p\4\w\7\x\9\6\d\a\7\7\p\b\r\q\g\i\o\d\8\r\a\q\x\t\4\8\x\q\d\2\q\w\4\p\i\q\1\c\w\a\j\k\k\k\n\1\6\x\h\3\j\g\y\1\x\6\e\s\x\b\w\g\3\j\d\0\9\g\4\i\t\6\u\1\v\k\8\y\3\b\6\3\v\l\y\e\f\t\5\7\o\5\t\b\u\r\m\h\6\j\5\u\2\w\0\7\g\z\h\i\j\2\8\8\s\d\b\r\n\k\c\c\c\e\0\d\o\m\l\u\1\j\i\l\h\v\3\f\s\7\4\3\b\6\f\a\k\j\r\w\s\l\l\g\l\s\5\x\i\f\x\s\n\h\h\6\h\k\k\z\l\b\p\5\4\h\t\z\z\r\1\y\z\y\m\0\k\4\b\v\r\3\h\d\q\l\f\i\9\3\l\4\1\o\z\7\1\d\e\q\s\p\k\j\8\h\6\g\e\s\l\a\3\8\n\l\j\d\8\a\q\f\n\h\4\y\k\v\j\s\r\p\t\m\j\u\h\c\x\6\x\a\h\l\q\0\j\y\w\p\4\h\1\q\6\h\2\2\h\x\2\6\s\k\d\y\4\g\s\m\k\v\i\5\f\k\v\a\m\r\v\n\w\e\5\5\c\t\p\i\1\u\k\a\s\g\w\w\y\u\t\o\x\p\o\x\d\o\h\k\m\h\7\7\k\6\e\c\k\0\e\7\3\i\k\5\y\i\i\3\c\0\h\w\s\v\d\h\j\w\x\4\f\i\r\1\0\y\h\4\e\0\n\s\b\v\e\2\6\i\5\b\o\g\t\j\u\5\d\w\y\2\6\c\i\w\k\p\z\y\7\y\5\l\p\t\0\t\9\8\3\a\z\t\9\4\u\3\v\5\s\1\f\j\5\u\5\d\e\d\e\k\u\f\z\a\3\p\n\3\4\1\q\v\x\z\i\g\s\e\9\1\a\a\4\l\g\g\l\h\y\o\7\h\l\p\m\w\8\y\j\t\v\e\i\j\2\0\b\n\c\a\n\i\6\2\i\v\w\2\6\u\x\c\e\x\a\l\h\h\q\v\q\o\t\v\0\f\i\p\6\w\p\6\2\j\f\4\a\k\r\r\7\3\s\9\4\u\4\r\a\j\5\s\z\p\2\p\3\9\n\0\u\u\g\a\h\k\7\b\d\c\4\v\r\y\s\v\o\e\c\n\3\2\u\z\y\p\9\u\2\w\o\h\z\w\c\s\5\y\h\4\y\1\3\q\w\m\z\6\b\s\x\2\j\9\m\y\q\b\8\2\0\l\5\q\m\m\f\i\o\f\d\h\b\x\l\5\t\x\1\o\n\0\s\8\w\4\v\7\7\c\w\t\y\l\h\8\w\i\e\0\o\t\9\b\0\4\5\k\r\6\d\j\6\n\m\z\h\c\m\8\z\e\5\0\x\g\h\y\h\t\b\i\k\j\i\6\q\n\2\h\y\a\n\w\8\i\0\p\b\0\h\c\n\l\j\v\3\c\j\l\n\h\i\n\n\u\o\a\c\m\o\t\0\n\j\j\7\m\5\d\5\f\5\e\u\m\g\h\s\z\f\y\3\e\f\g\9\y\e\g\b\5\9 ]] 00:08:05.566 16:56:55 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:08:05.825 16:56:56 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:08:05.825 16:56:56 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:08:05.825 16:56:56 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:05.825 16:56:56 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:05.825 [2024-07-15 16:56:56.081213] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:08:05.825 [2024-07-15 16:56:56.081298] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64058 ] 00:08:05.825 { 00:08:05.825 "subsystems": [ 00:08:05.825 { 00:08:05.825 "subsystem": "bdev", 00:08:05.825 "config": [ 00:08:05.825 { 00:08:05.825 "params": { 00:08:05.825 "block_size": 512, 00:08:05.825 "num_blocks": 1048576, 00:08:05.825 "name": "malloc0" 00:08:05.825 }, 00:08:05.825 "method": "bdev_malloc_create" 00:08:05.825 }, 00:08:05.825 { 00:08:05.825 "params": { 00:08:05.825 "filename": "/dev/zram1", 00:08:05.825 "name": "uring0" 00:08:05.825 }, 00:08:05.825 "method": "bdev_uring_create" 00:08:05.825 }, 00:08:05.825 { 00:08:05.825 "method": "bdev_wait_for_examine" 00:08:05.825 } 00:08:05.825 ] 00:08:05.825 } 00:08:05.825 ] 00:08:05.825 } 00:08:06.083 [2024-07-15 16:56:56.214873] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.083 [2024-07-15 16:56:56.328678] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.341 [2024-07-15 16:56:56.382809] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:10.173  Copying: 150/512 [MB] (150 MBps) Copying: 296/512 [MB] (145 MBps) Copying: 445/512 [MB] (149 MBps) Copying: 512/512 [MB] (average 148 MBps) 00:08:10.173 00:08:10.173 16:57:00 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:08:10.173 16:57:00 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:08:10.173 16:57:00 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:08:10.173 16:57:00 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:08:10.173 16:57:00 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:08:10.173 16:57:00 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:08:10.173 16:57:00 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:10.173 16:57:00 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:10.432 [2024-07-15 16:57:00.491351] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:08:10.432 [2024-07-15 16:57:00.491467] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64114 ] 00:08:10.432 { 00:08:10.432 "subsystems": [ 00:08:10.432 { 00:08:10.432 "subsystem": "bdev", 00:08:10.432 "config": [ 00:08:10.432 { 00:08:10.432 "params": { 00:08:10.432 "block_size": 512, 00:08:10.432 "num_blocks": 1048576, 00:08:10.432 "name": "malloc0" 00:08:10.432 }, 00:08:10.432 "method": "bdev_malloc_create" 00:08:10.432 }, 00:08:10.432 { 00:08:10.432 "params": { 00:08:10.432 "filename": "/dev/zram1", 00:08:10.432 "name": "uring0" 00:08:10.432 }, 00:08:10.432 "method": "bdev_uring_create" 00:08:10.432 }, 00:08:10.432 { 00:08:10.432 "params": { 00:08:10.432 "name": "uring0" 00:08:10.432 }, 00:08:10.432 "method": "bdev_uring_delete" 00:08:10.432 }, 00:08:10.432 { 00:08:10.432 "method": "bdev_wait_for_examine" 00:08:10.432 } 00:08:10.432 ] 00:08:10.432 } 00:08:10.432 ] 00:08:10.432 } 00:08:10.432 [2024-07-15 16:57:00.629493] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.690 [2024-07-15 16:57:00.746002] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.690 [2024-07-15 16:57:00.800592] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:11.207  Copying: 0/0 [B] (average 0 Bps) 00:08:11.207 00:08:11.207 16:57:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 00:08:11.207 16:57:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:08:11.207 16:57:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@648 -- # local es=0 00:08:11.207 16:57:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:08:11.207 16:57:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 00:08:11.208 16:57:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:11.208 16:57:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:11.208 16:57:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:11.208 16:57:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:11.208 16:57:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:11.208 16:57:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:11.208 16:57:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:11.208 16:57:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:11.208 16:57:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:11.208 16:57:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:11.208 16:57:01 spdk_dd.spdk_dd_uring.dd_uring_copy 
-- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:08:11.208 [2024-07-15 16:57:01.469290] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:08:11.208 [2024-07-15 16:57:01.469396] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64143 ] 00:08:11.208 { 00:08:11.208 "subsystems": [ 00:08:11.208 { 00:08:11.208 "subsystem": "bdev", 00:08:11.208 "config": [ 00:08:11.208 { 00:08:11.208 "params": { 00:08:11.208 "block_size": 512, 00:08:11.208 "num_blocks": 1048576, 00:08:11.208 "name": "malloc0" 00:08:11.208 }, 00:08:11.208 "method": "bdev_malloc_create" 00:08:11.208 }, 00:08:11.208 { 00:08:11.208 "params": { 00:08:11.208 "filename": "/dev/zram1", 00:08:11.208 "name": "uring0" 00:08:11.208 }, 00:08:11.208 "method": "bdev_uring_create" 00:08:11.208 }, 00:08:11.208 { 00:08:11.208 "params": { 00:08:11.208 "name": "uring0" 00:08:11.208 }, 00:08:11.208 "method": "bdev_uring_delete" 00:08:11.208 }, 00:08:11.208 { 00:08:11.208 "method": "bdev_wait_for_examine" 00:08:11.208 } 00:08:11.208 ] 00:08:11.208 } 00:08:11.208 ] 00:08:11.208 } 00:08:11.466 [2024-07-15 16:57:01.601121] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.466 [2024-07-15 16:57:01.712942] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.724 [2024-07-15 16:57:01.767782] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:11.724 [2024-07-15 16:57:01.975090] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:08:11.724 [2024-07-15 16:57:01.975160] spdk_dd.c: 933:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:08:11.725 [2024-07-15 16:57:01.975173] spdk_dd.c:1090:dd_run: *ERROR*: uring0: No such device 00:08:11.725 [2024-07-15 16:57:01.975184] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:12.000 [2024-07-15 16:57:02.286112] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:12.258 16:57:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@651 -- # es=237 00:08:12.258 16:57:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:12.258 16:57:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@660 -- # es=109 00:08:12.258 16:57:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@661 -- # case "$es" in 00:08:12.258 16:57:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@668 -- # es=1 00:08:12.258 16:57:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:12.258 16:57:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 00:08:12.258 16:57:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # local id=1 00:08:12.258 16:57:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@174 -- # [[ -e /sys/block/zram1 ]] 00:08:12.258 16:57:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@176 -- # echo 1 00:08:12.258 16:57:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # echo 1 00:08:12.258 16:57:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 
/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:08:12.516 00:08:12.516 real 0m15.521s 00:08:12.516 user 0m10.622s 00:08:12.516 sys 0m12.448s 00:08:12.516 ************************************ 00:08:12.516 END TEST dd_uring_copy 00:08:12.516 ************************************ 00:08:12.516 16:57:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:12.516 16:57:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:12.516 16:57:02 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1142 -- # return 0 00:08:12.516 ************************************ 00:08:12.516 END TEST spdk_dd_uring 00:08:12.516 ************************************ 00:08:12.516 00:08:12.516 real 0m15.653s 00:08:12.516 user 0m10.689s 00:08:12.516 sys 0m12.517s 00:08:12.516 16:57:02 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:12.516 16:57:02 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:08:12.516 16:57:02 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:08:12.516 16:57:02 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:08:12.516 16:57:02 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:12.516 16:57:02 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:12.516 16:57:02 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:12.516 ************************************ 00:08:12.516 START TEST spdk_dd_sparse 00:08:12.516 ************************************ 00:08:12.516 16:57:02 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:08:12.516 * Looking for test storage... 00:08:12.516 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:12.516 16:57:02 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:12.516 16:57:02 spdk_dd.spdk_dd_sparse -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:12.516 16:57:02 spdk_dd.spdk_dd_sparse -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:12.516 16:57:02 spdk_dd.spdk_dd_sparse -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:12.516 16:57:02 spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.517 16:57:02 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.517 16:57:02 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.517 16:57:02 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:08:12.517 16:57:02 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.517 16:57:02 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:08:12.517 16:57:02 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:08:12.517 16:57:02 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:08:12.517 16:57:02 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:08:12.517 16:57:02 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:08:12.517 16:57:02 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:08:12.517 16:57:02 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:08:12.517 16:57:02 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:08:12.517 16:57:02 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:08:12.517 16:57:02 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:08:12.517 16:57:02 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:08:12.774 1+0 records in 00:08:12.774 1+0 records out 00:08:12.774 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00592328 s, 708 MB/s 00:08:12.775 16:57:02 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:08:12.775 1+0 records in 00:08:12.775 1+0 records out 00:08:12.775 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00722308 s, 581 MB/s 00:08:12.775 16:57:02 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:08:12.775 1+0 records in 00:08:12.775 1+0 records out 00:08:12.775 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00550702 s, 762 MB/s 00:08:12.775 16:57:02 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:08:12.775 16:57:02 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:12.775 16:57:02 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:12.775 16:57:02 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:08:12.775 ************************************ 00:08:12.775 START TEST dd_sparse_file_to_file 00:08:12.775 ************************************ 00:08:12.775 16:57:02 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1123 -- # 
file_to_file 00:08:12.775 16:57:02 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:08:12.775 16:57:02 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:08:12.775 16:57:02 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:08:12.775 16:57:02 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:08:12.775 16:57:02 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:08:12.775 16:57:02 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:08:12.775 16:57:02 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:08:12.775 16:57:02 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:08:12.775 16:57:02 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:08:12.775 16:57:02 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:08:12.775 [2024-07-15 16:57:02.891126] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:08:12.775 [2024-07-15 16:57:02.891416] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64234 ] 00:08:12.775 { 00:08:12.775 "subsystems": [ 00:08:12.775 { 00:08:12.775 "subsystem": "bdev", 00:08:12.775 "config": [ 00:08:12.775 { 00:08:12.775 "params": { 00:08:12.775 "block_size": 4096, 00:08:12.775 "filename": "dd_sparse_aio_disk", 00:08:12.775 "name": "dd_aio" 00:08:12.775 }, 00:08:12.775 "method": "bdev_aio_create" 00:08:12.775 }, 00:08:12.775 { 00:08:12.775 "params": { 00:08:12.775 "lvs_name": "dd_lvstore", 00:08:12.775 "bdev_name": "dd_aio" 00:08:12.775 }, 00:08:12.775 "method": "bdev_lvol_create_lvstore" 00:08:12.775 }, 00:08:12.775 { 00:08:12.775 "method": "bdev_wait_for_examine" 00:08:12.775 } 00:08:12.775 ] 00:08:12.775 } 00:08:12.775 ] 00:08:12.775 } 00:08:12.775 [2024-07-15 16:57:03.023008] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.032 [2024-07-15 16:57:03.140405] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.032 [2024-07-15 16:57:03.195758] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:13.290  Copying: 12/36 [MB] (average 1090 MBps) 00:08:13.290 00:08:13.290 16:57:03 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:08:13.290 16:57:03 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:08:13.290 16:57:03 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:08:13.290 16:57:03 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:08:13.290 16:57:03 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:08:13.290 16:57:03 
spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:08:13.290 16:57:03 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:08:13.290 16:57:03 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:08:13.290 16:57:03 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:08:13.290 16:57:03 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:08:13.290 00:08:13.290 real 0m0.742s 00:08:13.290 user 0m0.485s 00:08:13.290 sys 0m0.343s 00:08:13.290 16:57:03 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:13.290 ************************************ 00:08:13.290 END TEST dd_sparse_file_to_file 00:08:13.290 ************************************ 00:08:13.290 16:57:03 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:08:13.549 16:57:03 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1142 -- # return 0 00:08:13.549 16:57:03 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:08:13.549 16:57:03 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:13.549 16:57:03 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:13.549 16:57:03 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:08:13.549 ************************************ 00:08:13.549 START TEST dd_sparse_file_to_bdev 00:08:13.549 ************************************ 00:08:13.549 16:57:03 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1123 -- # file_to_bdev 00:08:13.549 16:57:03 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:08:13.549 16:57:03 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:08:13.549 16:57:03 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' ['thin_provision']='true') 00:08:13.549 16:57:03 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:08:13.549 16:57:03 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:08:13.549 16:57:03 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:08:13.549 16:57:03 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:08:13.549 16:57:03 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:13.549 [2024-07-15 16:57:03.688604] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
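The [[ 37748736 == 37748736 ]] and [[ 24576 == 24576 ]] comparisons that close dd_sparse_file_to_file above check both the apparent size (stat %s) and the allocated block count (stat %b) of source and destination, which is what proves the holes survived the --sparse copy. A hedged sketch of the same check run by hand; the expected figures come straight from the log (the prepare step wrote three 4 MiB extents at seek 0, 4 and 8):

    # Hedged sketch: confirm file_zero1/file_zero2 are still sparse after the copy.
    stat --printf='%s bytes apparent, %b blocks allocated\n' file_zero1 file_zero2
    # Both should report 37748736 bytes apparent (36 MiB, up to the end of the extent
    # written at seek=8) but only 24576 x 512-byte blocks = 12 MiB allocated, i.e. just
    # the three 4 MiB extents, so the holes were preserved.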
00:08:13.549 [2024-07-15 16:57:03.688709] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64277 ] 00:08:13.549 { 00:08:13.549 "subsystems": [ 00:08:13.549 { 00:08:13.549 "subsystem": "bdev", 00:08:13.549 "config": [ 00:08:13.549 { 00:08:13.549 "params": { 00:08:13.549 "block_size": 4096, 00:08:13.549 "filename": "dd_sparse_aio_disk", 00:08:13.549 "name": "dd_aio" 00:08:13.549 }, 00:08:13.549 "method": "bdev_aio_create" 00:08:13.549 }, 00:08:13.549 { 00:08:13.549 "params": { 00:08:13.549 "lvs_name": "dd_lvstore", 00:08:13.549 "lvol_name": "dd_lvol", 00:08:13.549 "size_in_mib": 36, 00:08:13.549 "thin_provision": true 00:08:13.549 }, 00:08:13.549 "method": "bdev_lvol_create" 00:08:13.549 }, 00:08:13.549 { 00:08:13.549 "method": "bdev_wait_for_examine" 00:08:13.549 } 00:08:13.549 ] 00:08:13.549 } 00:08:13.549 ] 00:08:13.549 } 00:08:13.549 [2024-07-15 16:57:03.828834] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.807 [2024-07-15 16:57:03.987659] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.807 [2024-07-15 16:57:04.063964] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:14.323  Copying: 12/36 [MB] (average 500 MBps) 00:08:14.323 00:08:14.323 00:08:14.323 real 0m0.872s 00:08:14.323 user 0m0.589s 00:08:14.323 sys 0m0.436s 00:08:14.323 ************************************ 00:08:14.324 END TEST dd_sparse_file_to_bdev 00:08:14.324 ************************************ 00:08:14.324 16:57:04 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:14.324 16:57:04 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:14.324 16:57:04 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1142 -- # return 0 00:08:14.324 16:57:04 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:08:14.324 16:57:04 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:14.324 16:57:04 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:14.324 16:57:04 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:08:14.324 ************************************ 00:08:14.324 START TEST dd_sparse_bdev_to_file 00:08:14.324 ************************************ 00:08:14.324 16:57:04 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1123 -- # bdev_to_file 00:08:14.324 16:57:04 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:08:14.324 16:57:04 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:08:14.324 16:57:04 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:08:14.324 16:57:04 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:08:14.324 16:57:04 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:08:14.324 16:57:04 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 
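With dd_sparse_file_to_bdev finished and dd_sparse_bdev_to_file starting here, the three sparse tests form a round trip: regular file to regular file, file to a thin-provisioned lvol, then lvol back to a file, always with --sparse and a 12 MiB --bs. A hedged sketch of the same sequence outside the harness; the spdk_dd path is shortened and sparse.json is an illustrative stand-in for the config the scripts pipe through /dev/fd/62 (the bdev_aio_create / bdev_lvol_create_lvstore / bdev_lvol_create sections dumped above):

    # Hedged reproduction sketch (binary path shortened; sparse.json is illustrative).
    spdk_dd --if=file_zero1 --of=file_zero2         --bs=12582912 --sparse --json sparse.json   # file -> file
    spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json sparse.json   # file -> lvol
    spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json sparse.json   # lvol -> file
    # The stat checks that follow assert file_zero3 matches the source in both apparent
    # size and allocated blocks, i.e. sparseness is preserved in every direction.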
00:08:14.324 16:57:04 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:08:14.324 16:57:04 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:08:14.324 [2024-07-15 16:57:04.613208] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:08:14.324 [2024-07-15 16:57:04.613308] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64315 ] 00:08:14.324 { 00:08:14.324 "subsystems": [ 00:08:14.324 { 00:08:14.324 "subsystem": "bdev", 00:08:14.324 "config": [ 00:08:14.324 { 00:08:14.324 "params": { 00:08:14.324 "block_size": 4096, 00:08:14.324 "filename": "dd_sparse_aio_disk", 00:08:14.324 "name": "dd_aio" 00:08:14.324 }, 00:08:14.324 "method": "bdev_aio_create" 00:08:14.324 }, 00:08:14.324 { 00:08:14.324 "method": "bdev_wait_for_examine" 00:08:14.324 } 00:08:14.324 ] 00:08:14.324 } 00:08:14.324 ] 00:08:14.324 } 00:08:14.582 [2024-07-15 16:57:04.753980] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.839 [2024-07-15 16:57:04.911471] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.839 [2024-07-15 16:57:04.988726] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:15.406  Copying: 12/36 [MB] (average 857 MBps) 00:08:15.406 00:08:15.406 16:57:05 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:08:15.406 16:57:05 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:08:15.406 16:57:05 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:08:15.406 16:57:05 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:08:15.406 16:57:05 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:08:15.406 16:57:05 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:08:15.406 16:57:05 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:08:15.406 16:57:05 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:08:15.406 ************************************ 00:08:15.406 END TEST dd_sparse_bdev_to_file 00:08:15.406 ************************************ 00:08:15.406 16:57:05 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:08:15.406 16:57:05 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:08:15.406 00:08:15.406 real 0m0.888s 00:08:15.406 user 0m0.594s 00:08:15.406 sys 0m0.444s 00:08:15.406 16:57:05 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:15.406 16:57:05 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:08:15.406 16:57:05 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1142 -- # return 0 00:08:15.406 16:57:05 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:08:15.406 16:57:05 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:08:15.406 16:57:05 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:08:15.406 16:57:05 spdk_dd.spdk_dd_sparse 
-- dd/sparse.sh@13 -- # rm file_zero2 00:08:15.406 16:57:05 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:08:15.406 ************************************ 00:08:15.406 END TEST spdk_dd_sparse 00:08:15.406 ************************************ 00:08:15.406 00:08:15.406 real 0m2.779s 00:08:15.406 user 0m1.756s 00:08:15.406 sys 0m1.404s 00:08:15.406 16:57:05 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:15.406 16:57:05 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:08:15.406 16:57:05 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:08:15.406 16:57:05 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:08:15.406 16:57:05 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:15.406 16:57:05 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:15.406 16:57:05 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:15.406 ************************************ 00:08:15.406 START TEST spdk_dd_negative 00:08:15.406 ************************************ 00:08:15.406 16:57:05 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:08:15.406 * Looking for test storage... 00:08:15.406 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:15.406 16:57:05 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:15.406 16:57:05 spdk_dd.spdk_dd_negative -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:15.406 16:57:05 spdk_dd.spdk_dd_negative -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:15.406 16:57:05 spdk_dd.spdk_dd_negative -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:15.406 16:57:05 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.406 16:57:05 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.406 16:57:05 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.406 16:57:05 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:08:15.406 16:57:05 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.406 16:57:05 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:15.406 16:57:05 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:15.406 16:57:05 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:15.406 16:57:05 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@105 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:15.406 16:57:05 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 00:08:15.406 16:57:05 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:15.406 16:57:05 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:15.406 16:57:05 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:15.406 ************************************ 00:08:15.406 START TEST dd_invalid_arguments 00:08:15.406 ************************************ 00:08:15.406 16:57:05 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1123 -- # invalid_arguments 00:08:15.406 16:57:05 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:08:15.406 16:57:05 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@648 -- # local es=0 00:08:15.406 16:57:05 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:08:15.406 16:57:05 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:15.406 16:57:05 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:15.406 16:57:05 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:15.406 16:57:05 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:15.406 16:57:05 
spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:15.406 16:57:05 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:15.406 16:57:05 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:15.406 16:57:05 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:15.406 16:57:05 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:08:15.406 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:08:15.406 00:08:15.406 CPU options: 00:08:15.406 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:08:15.406 (like [0,1,10]) 00:08:15.406 --lcores lcore to CPU mapping list. The list is in the format: 00:08:15.406 [<,lcores[@CPUs]>...] 00:08:15.406 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:08:15.406 Within the group, '-' is used for range separator, 00:08:15.406 ',' is used for single number separator. 00:08:15.406 '( )' can be omitted for single element group, 00:08:15.406 '@' can be omitted if cpus and lcores have the same value 00:08:15.406 --disable-cpumask-locks Disable CPU core lock files. 00:08:15.406 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:08:15.406 pollers in the app support interrupt mode) 00:08:15.406 -p, --main-core main (primary) core for DPDK 00:08:15.406 00:08:15.406 Configuration options: 00:08:15.406 -c, --config, --json JSON config file 00:08:15.406 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:08:15.406 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:08:15.406 --wait-for-rpc wait for RPCs to initialize subsystems 00:08:15.406 --rpcs-allowed comma-separated list of permitted RPCS 00:08:15.406 --json-ignore-init-errors don't exit on invalid config entry 00:08:15.406 00:08:15.406 Memory options: 00:08:15.406 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:08:15.406 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:08:15.406 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:08:15.406 -R, --huge-unlink unlink huge files after initialization 00:08:15.406 -n, --mem-channels number of memory channels used for DPDK 00:08:15.406 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:08:15.406 --msg-mempool-size global message memory pool size in count (default: 262143) 00:08:15.406 --no-huge run without using hugepages 00:08:15.406 -i, --shm-id shared memory ID (optional) 00:08:15.406 -g, --single-file-segments force creating just one hugetlbfs file 00:08:15.406 00:08:15.406 PCI options: 00:08:15.406 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:08:15.406 -B, --pci-blocked pci addr to block (can be used more than once) 00:08:15.406 -u, --no-pci disable PCI access 00:08:15.406 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:08:15.406 00:08:15.406 Log options: 00:08:15.406 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:08:15.406 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:08:15.406 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:08:15.406 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:08:15.406 blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, iscsi_init, 00:08:15.406 json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, nvme, 00:08:15.406 nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, sock_posix, 00:08:15.406 thread, trace, uring, vbdev_delay, vbdev_gpt, vbdev_lvol, vbdev_opal, 00:08:15.406 vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, 00:08:15.406 virtio, virtio_blk, virtio_dev, virtio_pci, virtio_user, 00:08:15.406 virtio_vfio_user, vmd) 00:08:15.406 --silence-noticelog disable notice level logging to stderr 00:08:15.406 00:08:15.406 Trace options: 00:08:15.406 --num-trace-entries number of trace entries for each core, must be power of 2, 00:08:15.406 setting 0 to disable trace (default 32768) 00:08:15.406 Tracepoints vary in size and can use more than one trace entry. 00:08:15.406 -e, --tpoint-group [:] 00:08:15.406 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:08:15.406 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:08:15.406 [2024-07-15 16:57:05.688593] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 00:08:15.406 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, all). 00:08:15.406 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:08:15.406 a tracepoint group. First tpoint inside a group can be enabled by 00:08:15.406 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:08:15.406 combined (e.g. thread,bdev:0x1). 
All available tpoints can be found 00:08:15.406 in /include/spdk_internal/trace_defs.h 00:08:15.406 00:08:15.406 Other options: 00:08:15.406 -h, --help show this usage 00:08:15.406 -v, --version print SPDK version 00:08:15.406 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:08:15.406 --env-context Opaque context for use of the env implementation 00:08:15.406 00:08:15.406 Application specific: 00:08:15.406 [--------- DD Options ---------] 00:08:15.406 --if Input file. Must specify either --if or --ib. 00:08:15.406 --ib Input bdev. Must specifier either --if or --ib 00:08:15.406 --of Output file. Must specify either --of or --ob. 00:08:15.406 --ob Output bdev. Must specify either --of or --ob. 00:08:15.406 --iflag Input file flags. 00:08:15.406 --oflag Output file flags. 00:08:15.406 --bs I/O unit size (default: 4096) 00:08:15.406 --qd Queue depth (default: 2) 00:08:15.406 --count I/O unit count. The number of I/O units to copy. (default: all) 00:08:15.406 --skip Skip this many I/O units at start of input. (default: 0) 00:08:15.406 --seek Skip this many I/O units at start of output. (default: 0) 00:08:15.406 --aio Force usage of AIO. (by default io_uring is used if available) 00:08:15.406 --sparse Enable hole skipping in input target 00:08:15.406 Available iflag and oflag values: 00:08:15.406 append - append mode 00:08:15.406 direct - use direct I/O for data 00:08:15.406 directory - fail unless a directory 00:08:15.406 dsync - use synchronized I/O for data 00:08:15.406 noatime - do not update access time 00:08:15.406 noctty - do not assign controlling terminal from file 00:08:15.406 nofollow - do not follow symlinks 00:08:15.406 nonblock - use non-blocking I/O 00:08:15.406 sync - use synchronized I/O for data and metadata 00:08:15.666 16:57:05 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@651 -- # es=2 00:08:15.666 16:57:05 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:15.666 16:57:05 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:15.666 16:57:05 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:15.666 00:08:15.666 real 0m0.058s 00:08:15.666 user 0m0.038s 00:08:15.666 sys 0m0.019s 00:08:15.666 16:57:05 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:15.666 16:57:05 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:08:15.666 ************************************ 00:08:15.666 END TEST dd_invalid_arguments 00:08:15.666 ************************************ 00:08:15.666 16:57:05 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:08:15.666 16:57:05 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@108 -- # run_test dd_double_input double_input 00:08:15.666 16:57:05 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:15.666 16:57:05 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:15.666 16:57:05 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:15.666 ************************************ 00:08:15.666 START TEST dd_double_input 00:08:15.666 ************************************ 00:08:15.666 16:57:05 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1123 -- # double_input 00:08:15.666 16:57:05 spdk_dd.spdk_dd_negative.dd_double_input -- 
dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:08:15.666 16:57:05 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@648 -- # local es=0 00:08:15.666 16:57:05 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:08:15.666 16:57:05 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:15.666 16:57:05 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:15.666 16:57:05 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:15.666 16:57:05 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:15.666 16:57:05 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:15.666 16:57:05 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:15.666 16:57:05 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:15.666 16:57:05 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:15.666 16:57:05 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:08:15.666 [2024-07-15 16:57:05.801597] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 
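dd_invalid_arguments and the dd_double_input check above establish the pattern every spdk_dd_negative test here follows: run spdk_dd with a deliberately bad flag combination under the NOT wrapper and treat the expected non-zero exit (and error message) as a pass. The sketch below only illustrates that observable behaviour under an assumed helper name (expect_failure); the real NOT / valid_exec_arg logic lives in common/autotest_common.sh and is more involved, as the xtrace shows. Paths are abbreviated.

    expect_failure() {
        # Run the command; the test passes only if it fails.
        if "$@"; then
            echo "expected failure, but command succeeded: $*" >&2
            return 1
        fi
        return 0
    }
    # The two checks above, restated:
    expect_failure spdk_dd --ii= --ob=                  # unrecognized option '--ii='
    expect_failure spdk_dd --if=dd.dump0 --ib= --ob=    # "You may specify either --if or --ib, but not both."
    # The tests that follow repeat the pattern for --of/--ob conflicts, missing input or
    # output, and invalid --bs / --count / --oflag values.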
00:08:15.666 16:57:05 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@651 -- # es=22 00:08:15.666 16:57:05 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:15.666 16:57:05 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:15.666 16:57:05 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:15.666 00:08:15.666 real 0m0.066s 00:08:15.666 user 0m0.046s 00:08:15.666 sys 0m0.019s 00:08:15.666 16:57:05 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:15.666 16:57:05 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:08:15.666 ************************************ 00:08:15.666 END TEST dd_double_input 00:08:15.666 ************************************ 00:08:15.666 16:57:05 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:08:15.666 16:57:05 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:08:15.666 16:57:05 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:15.666 16:57:05 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:15.666 16:57:05 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:15.666 ************************************ 00:08:15.666 START TEST dd_double_output 00:08:15.666 ************************************ 00:08:15.666 16:57:05 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1123 -- # double_output 00:08:15.666 16:57:05 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:08:15.666 16:57:05 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@648 -- # local es=0 00:08:15.666 16:57:05 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:08:15.666 16:57:05 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:15.666 16:57:05 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:15.666 16:57:05 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:15.666 16:57:05 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:15.666 16:57:05 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:15.666 16:57:05 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:15.666 16:57:05 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:15.666 16:57:05 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:15.666 16:57:05 spdk_dd.spdk_dd_negative.dd_double_output -- 
common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:08:15.666 [2024-07-15 16:57:05.919595] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:08:15.666 16:57:05 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@651 -- # es=22 00:08:15.666 16:57:05 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:15.666 16:57:05 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:15.666 16:57:05 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:15.666 00:08:15.666 real 0m0.067s 00:08:15.666 user 0m0.031s 00:08:15.666 sys 0m0.035s 00:08:15.666 16:57:05 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:15.666 16:57:05 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:08:15.666 ************************************ 00:08:15.666 END TEST dd_double_output 00:08:15.666 ************************************ 00:08:15.925 16:57:05 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:08:15.925 16:57:05 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:08:15.925 16:57:05 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:15.925 16:57:05 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:15.925 16:57:05 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:15.925 ************************************ 00:08:15.925 START TEST dd_no_input 00:08:15.925 ************************************ 00:08:15.925 16:57:05 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1123 -- # no_input 00:08:15.925 16:57:05 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:08:15.925 16:57:05 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@648 -- # local es=0 00:08:15.925 16:57:05 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:08:15.925 16:57:05 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:15.925 16:57:05 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:15.925 16:57:05 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:15.925 16:57:05 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:15.925 16:57:05 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:15.925 16:57:05 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:15.925 16:57:05 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:15.925 16:57:05 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:15.925 16:57:05 
spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:08:15.925 [2024-07-15 16:57:06.037350] spdk_dd.c:1499:main: *ERROR*: You must specify either --if or --ib 00:08:15.925 16:57:06 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@651 -- # es=22 00:08:15.925 16:57:06 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:15.925 16:57:06 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:15.925 16:57:06 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:15.925 00:08:15.925 real 0m0.072s 00:08:15.925 user 0m0.046s 00:08:15.925 sys 0m0.024s 00:08:15.925 16:57:06 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:15.925 16:57:06 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:08:15.925 ************************************ 00:08:15.925 END TEST dd_no_input 00:08:15.925 ************************************ 00:08:15.925 16:57:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:08:15.925 16:57:06 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@111 -- # run_test dd_no_output no_output 00:08:15.925 16:57:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:15.925 16:57:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:15.925 16:57:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:15.925 ************************************ 00:08:15.925 START TEST dd_no_output 00:08:15.925 ************************************ 00:08:15.925 16:57:06 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1123 -- # no_output 00:08:15.925 16:57:06 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:15.925 16:57:06 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@648 -- # local es=0 00:08:15.925 16:57:06 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:15.925 16:57:06 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:15.925 16:57:06 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:15.925 16:57:06 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:15.925 16:57:06 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:15.925 16:57:06 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:15.925 16:57:06 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:15.925 16:57:06 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:15.925 16:57:06 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:15.925 16:57:06 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:15.925 [2024-07-15 16:57:06.168236] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:08:15.925 16:57:06 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@651 -- # es=22 00:08:15.925 16:57:06 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:15.925 16:57:06 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:15.925 16:57:06 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:15.925 00:08:15.925 real 0m0.087s 00:08:15.925 user 0m0.059s 00:08:15.925 sys 0m0.026s 00:08:15.925 16:57:06 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:15.925 ************************************ 00:08:15.925 END TEST dd_no_output 00:08:15.925 ************************************ 00:08:15.925 16:57:06 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:08:16.184 16:57:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:08:16.184 16:57:06 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:08:16.184 16:57:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:16.184 16:57:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:16.184 16:57:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:16.184 ************************************ 00:08:16.184 START TEST dd_wrong_blocksize 00:08:16.184 ************************************ 00:08:16.184 16:57:06 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1123 -- # wrong_blocksize 00:08:16.184 16:57:06 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:08:16.184 16:57:06 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@648 -- # local es=0 00:08:16.185 16:57:06 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:08:16.185 16:57:06 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:16.185 16:57:06 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:16.185 16:57:06 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:16.185 16:57:06 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:16.185 16:57:06 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:16.185 16:57:06 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:16.185 16:57:06 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- 
common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:16.185 16:57:06 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:16.185 16:57:06 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:08:16.185 [2024-07-15 16:57:06.296441] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:08:16.185 16:57:06 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@651 -- # es=22 00:08:16.185 16:57:06 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:16.185 16:57:06 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:16.185 16:57:06 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:16.185 00:08:16.185 real 0m0.067s 00:08:16.185 user 0m0.042s 00:08:16.185 sys 0m0.025s 00:08:16.185 16:57:06 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:16.185 16:57:06 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:08:16.185 ************************************ 00:08:16.185 END TEST dd_wrong_blocksize 00:08:16.185 ************************************ 00:08:16.185 16:57:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:08:16.185 16:57:06 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:08:16.185 16:57:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:16.185 16:57:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:16.185 16:57:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:16.185 ************************************ 00:08:16.185 START TEST dd_smaller_blocksize 00:08:16.185 ************************************ 00:08:16.185 16:57:06 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1123 -- # smaller_blocksize 00:08:16.185 16:57:06 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:08:16.185 16:57:06 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@648 -- # local es=0 00:08:16.185 16:57:06 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:08:16.185 16:57:06 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:16.185 16:57:06 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:16.185 16:57:06 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:16.185 16:57:06 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case 
"$(type -t "$arg")" in 00:08:16.185 16:57:06 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:16.185 16:57:06 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:16.185 16:57:06 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:16.185 16:57:06 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:16.185 16:57:06 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:08:16.185 [2024-07-15 16:57:06.417674] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:08:16.185 [2024-07-15 16:57:06.418254] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64534 ] 00:08:16.444 [2024-07-15 16:57:06.556476] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.444 [2024-07-15 16:57:06.704422] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.703 [2024-07-15 16:57:06.760910] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:16.963 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:08:16.963 [2024-07-15 16:57:07.084540] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:08:16.963 [2024-07-15 16:57:07.084625] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:16.963 [2024-07-15 16:57:07.201613] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:17.222 16:57:07 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@651 -- # es=244 00:08:17.222 16:57:07 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:17.222 16:57:07 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@660 -- # es=116 00:08:17.222 16:57:07 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@661 -- # case "$es" in 00:08:17.222 16:57:07 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@668 -- # es=1 00:08:17.222 16:57:07 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:17.222 00:08:17.222 real 0m0.943s 00:08:17.222 user 0m0.447s 00:08:17.223 sys 0m0.386s 00:08:17.223 16:57:07 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:17.223 16:57:07 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:08:17.223 ************************************ 00:08:17.223 END TEST dd_smaller_blocksize 00:08:17.223 ************************************ 00:08:17.223 16:57:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:08:17.223 16:57:07 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 00:08:17.223 16:57:07 spdk_dd.spdk_dd_negative -- 
common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:17.223 16:57:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:17.223 16:57:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:17.223 ************************************ 00:08:17.223 START TEST dd_invalid_count 00:08:17.223 ************************************ 00:08:17.223 16:57:07 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1123 -- # invalid_count 00:08:17.223 16:57:07 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:08:17.223 16:57:07 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@648 -- # local es=0 00:08:17.223 16:57:07 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:08:17.223 16:57:07 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:17.223 16:57:07 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:17.223 16:57:07 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:17.223 16:57:07 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:17.223 16:57:07 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:17.223 16:57:07 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:17.223 16:57:07 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:17.223 16:57:07 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:17.223 16:57:07 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:08:17.223 [2024-07-15 16:57:07.410243] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:08:17.223 16:57:07 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@651 -- # es=22 00:08:17.223 16:57:07 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:17.223 16:57:07 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:17.223 16:57:07 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:17.223 00:08:17.223 real 0m0.064s 00:08:17.223 user 0m0.039s 00:08:17.223 sys 0m0.024s 00:08:17.223 16:57:07 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:17.223 16:57:07 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:08:17.223 ************************************ 00:08:17.223 END TEST dd_invalid_count 
00:08:17.223 ************************************ 00:08:17.223 16:57:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:08:17.223 16:57:07 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:08:17.223 16:57:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:17.223 16:57:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:17.223 16:57:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:17.223 ************************************ 00:08:17.223 START TEST dd_invalid_oflag 00:08:17.223 ************************************ 00:08:17.223 16:57:07 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1123 -- # invalid_oflag 00:08:17.223 16:57:07 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:08:17.223 16:57:07 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@648 -- # local es=0 00:08:17.223 16:57:07 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:08:17.223 16:57:07 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:17.223 16:57:07 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:17.223 16:57:07 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:17.223 16:57:07 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:17.223 16:57:07 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:17.223 16:57:07 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:17.223 16:57:07 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:17.223 16:57:07 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:17.223 16:57:07 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:08:17.482 [2024-07-15 16:57:07.529844] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:08:17.482 16:57:07 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@651 -- # es=22 00:08:17.482 16:57:07 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:17.482 16:57:07 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:17.482 16:57:07 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:17.482 00:08:17.482 real 0m0.071s 00:08:17.482 user 0m0.041s 00:08:17.482 sys 0m0.026s 00:08:17.482 16:57:07 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:17.482 16:57:07 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:08:17.482 
************************************ 00:08:17.482 END TEST dd_invalid_oflag 00:08:17.482 ************************************ 00:08:17.482 16:57:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:08:17.482 16:57:07 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:08:17.482 16:57:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:17.482 16:57:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:17.482 16:57:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:17.482 ************************************ 00:08:17.482 START TEST dd_invalid_iflag 00:08:17.482 ************************************ 00:08:17.482 16:57:07 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1123 -- # invalid_iflag 00:08:17.482 16:57:07 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:08:17.482 16:57:07 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@648 -- # local es=0 00:08:17.482 16:57:07 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:08:17.482 16:57:07 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:17.482 16:57:07 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:17.482 16:57:07 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:17.482 16:57:07 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:17.482 16:57:07 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:17.482 16:57:07 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:17.482 16:57:07 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:17.482 16:57:07 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:17.482 16:57:07 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:08:17.482 [2024-07-15 16:57:07.654513] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:08:17.482 16:57:07 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@651 -- # es=22 00:08:17.482 16:57:07 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:17.482 16:57:07 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:17.482 16:57:07 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:17.482 00:08:17.482 real 0m0.075s 00:08:17.482 user 0m0.050s 00:08:17.482 sys 0m0.025s 00:08:17.482 16:57:07 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:17.482 16:57:07 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- 
common/autotest_common.sh@10 -- # set +x 00:08:17.482 ************************************ 00:08:17.482 END TEST dd_invalid_iflag 00:08:17.482 ************************************ 00:08:17.482 16:57:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:08:17.482 16:57:07 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:08:17.482 16:57:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:17.482 16:57:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:17.482 16:57:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:17.482 ************************************ 00:08:17.482 START TEST dd_unknown_flag 00:08:17.482 ************************************ 00:08:17.482 16:57:07 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1123 -- # unknown_flag 00:08:17.482 16:57:07 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:08:17.482 16:57:07 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@648 -- # local es=0 00:08:17.482 16:57:07 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:08:17.482 16:57:07 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:17.482 16:57:07 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:17.482 16:57:07 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:17.482 16:57:07 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:17.482 16:57:07 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:17.482 16:57:07 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:17.482 16:57:07 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:17.482 16:57:07 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:17.482 16:57:07 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:08:17.741 [2024-07-15 16:57:07.783669] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:08:17.741 [2024-07-15 16:57:07.783765] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64631 ] 00:08:17.741 [2024-07-15 16:57:07.924430] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.999 [2024-07-15 16:57:08.060620] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.999 [2024-07-15 16:57:08.121043] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:17.999 [2024-07-15 16:57:08.157959] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:08:17.999 [2024-07-15 16:57:08.158025] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:17.999 [2024-07-15 16:57:08.158102] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:08:17.999 [2024-07-15 16:57:08.158120] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:17.999 [2024-07-15 16:57:08.158396] spdk_dd.c:1218:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:08:17.999 [2024-07-15 16:57:08.158417] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:17.999 [2024-07-15 16:57:08.158473] app.c:1040:app_stop: *NOTICE*: spdk_app_stop called twice 00:08:17.999 [2024-07-15 16:57:08.158487] app.c:1040:app_stop: *NOTICE*: spdk_app_stop called twice 00:08:17.999 [2024-07-15 16:57:08.275634] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:18.258 16:57:08 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@651 -- # es=234 00:08:18.258 16:57:08 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:18.258 16:57:08 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@660 -- # es=106 00:08:18.258 16:57:08 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@661 -- # case "$es" in 00:08:18.258 16:57:08 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@668 -- # es=1 00:08:18.258 16:57:08 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:18.258 00:08:18.258 real 0m0.658s 00:08:18.258 user 0m0.388s 00:08:18.258 sys 0m0.171s 00:08:18.258 16:57:08 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:18.258 16:57:08 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:08:18.258 ************************************ 00:08:18.258 END TEST dd_unknown_flag 00:08:18.258 ************************************ 00:08:18.258 16:57:08 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:08:18.258 16:57:08 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 00:08:18.258 16:57:08 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:18.258 16:57:08 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:18.258 16:57:08 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:18.258 ************************************ 00:08:18.258 START TEST dd_invalid_json 00:08:18.258 ************************************ 00:08:18.258 16:57:08 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1123 -- # invalid_json 00:08:18.258 16:57:08 
spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:08:18.258 16:57:08 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@648 -- # local es=0 00:08:18.258 16:57:08 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@95 -- # : 00:08:18.258 16:57:08 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:08:18.258 16:57:08 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:18.258 16:57:08 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:18.258 16:57:08 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:18.258 16:57:08 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:18.258 16:57:08 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:18.258 16:57:08 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:18.258 16:57:08 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:18.258 16:57:08 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:18.258 16:57:08 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:08:18.258 [2024-07-15 16:57:08.491198] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:08:18.258 [2024-07-15 16:57:08.491316] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64660 ] 00:08:18.517 [2024-07-15 16:57:08.625944] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.517 [2024-07-15 16:57:08.740007] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.517 [2024-07-15 16:57:08.740136] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:08:18.517 [2024-07-15 16:57:08.740155] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:18.517 [2024-07-15 16:57:08.740165] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:18.517 [2024-07-15 16:57:08.740205] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:18.775 16:57:08 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@651 -- # es=234 00:08:18.775 16:57:08 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:18.775 16:57:08 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@660 -- # es=106 00:08:18.775 16:57:08 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@661 -- # case "$es" in 00:08:18.775 16:57:08 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@668 -- # es=1 00:08:18.775 16:57:08 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:18.775 ************************************ 00:08:18.775 END TEST dd_invalid_json 00:08:18.775 ************************************ 00:08:18.775 00:08:18.775 real 0m0.413s 00:08:18.775 user 0m0.240s 00:08:18.775 sys 0m0.071s 00:08:18.775 16:57:08 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:18.775 16:57:08 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:08:18.775 16:57:08 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:08:18.775 00:08:18.775 real 0m3.334s 00:08:18.775 user 0m1.689s 00:08:18.775 sys 0m1.277s 00:08:18.775 16:57:08 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:18.775 ************************************ 00:08:18.775 END TEST spdk_dd_negative 00:08:18.775 16:57:08 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:18.775 ************************************ 00:08:18.775 16:57:08 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:08:18.775 ************************************ 00:08:18.775 END TEST spdk_dd 00:08:18.775 ************************************ 00:08:18.775 00:08:18.775 real 1m20.587s 00:08:18.775 user 0m53.159s 00:08:18.775 sys 0m33.751s 00:08:18.775 16:57:08 spdk_dd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:18.775 16:57:08 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:18.775 16:57:08 -- common/autotest_common.sh@1142 -- # return 0 00:08:18.775 16:57:08 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:08:18.775 16:57:08 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:08:18.775 16:57:08 -- spdk/autotest.sh@260 -- # timing_exit lib 00:08:18.775 16:57:08 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:18.775 16:57:08 -- common/autotest_common.sh@10 -- # set +x 00:08:18.775 16:57:08 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 
']' 00:08:18.775 16:57:08 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:08:18.775 16:57:08 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:08:18.775 16:57:08 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:08:18.775 16:57:08 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:08:18.775 16:57:08 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:08:18.775 16:57:09 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:18.775 16:57:09 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:18.775 16:57:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:18.775 16:57:09 -- common/autotest_common.sh@10 -- # set +x 00:08:18.775 ************************************ 00:08:18.775 START TEST nvmf_tcp 00:08:18.775 ************************************ 00:08:18.775 16:57:09 nvmf_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:19.036 * Looking for test storage... 00:08:19.036 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:08:19.036 16:57:09 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:08:19.036 16:57:09 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:08:19.036 16:57:09 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:19.036 16:57:09 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:08:19.036 16:57:09 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:19.037 16:57:09 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:19.037 16:57:09 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:19.037 16:57:09 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:19.037 16:57:09 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:19.037 16:57:09 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:19.037 16:57:09 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:19.037 16:57:09 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:19.037 16:57:09 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:19.037 16:57:09 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:19.037 16:57:09 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da 00:08:19.037 16:57:09 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=0b4e8503-7bac-4879-926a-209303c4b3da 00:08:19.037 16:57:09 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:19.037 16:57:09 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:19.037 16:57:09 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:19.037 16:57:09 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:19.037 16:57:09 nvmf_tcp -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:19.037 16:57:09 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:19.037 16:57:09 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:19.037 16:57:09 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:19.037 16:57:09 nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.037 16:57:09 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.037 16:57:09 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.037 16:57:09 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:08:19.037 16:57:09 nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.037 16:57:09 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:08:19.037 16:57:09 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:19.037 16:57:09 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:19.037 16:57:09 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:19.037 16:57:09 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:19.037 16:57:09 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:19.037 16:57:09 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:19.037 16:57:09 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:19.037 16:57:09 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:19.037 16:57:09 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:08:19.037 16:57:09 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:08:19.037 16:57:09 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:08:19.037 16:57:09 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:19.037 16:57:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:19.037 16:57:09 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 1 -eq 0 ]] 00:08:19.037 16:57:09 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:19.037 16:57:09 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:19.037 16:57:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:19.037 16:57:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:19.037 ************************************ 00:08:19.037 START TEST nvmf_host_management 00:08:19.037 ************************************ 00:08:19.037 
16:57:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:19.037 * Looking for test storage... 00:08:19.037 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:19.037 16:57:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:19.037 16:57:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:08:19.037 16:57:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:19.037 16:57:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:19.037 16:57:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:19.037 16:57:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:19.037 16:57:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:19.037 16:57:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:19.037 16:57:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:19.037 16:57:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:19.037 16:57:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:19.037 16:57:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:19.037 16:57:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da 00:08:19.037 16:57:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=0b4e8503-7bac-4879-926a-209303c4b3da 00:08:19.037 16:57:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:19.037 16:57:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:19.037 16:57:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:19.037 16:57:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:19.037 16:57:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:19.037 16:57:09 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:19.037 16:57:09 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:19.037 16:57:09 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:19.037 16:57:09 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.037 16:57:09 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.037 16:57:09 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.037 16:57:09 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:08:19.037 16:57:09 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.037 16:57:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:08:19.037 16:57:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:19.037 16:57:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:19.037 16:57:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:19.037 16:57:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:19.037 16:57:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:19.037 16:57:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:19.037 16:57:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:19.037 16:57:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:19.037 16:57:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:19.038 16:57:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:19.038 16:57:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:08:19.038 16:57:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:19.038 16:57:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:19.038 16:57:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:19.038 16:57:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:19.038 16:57:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 
00:08:19.038 16:57:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:19.038 16:57:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:19.038 16:57:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:19.038 16:57:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:19.038 16:57:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:19.038 16:57:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:19.038 16:57:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:19.038 16:57:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:19.038 16:57:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:19.038 16:57:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:19.038 16:57:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:19.038 16:57:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:19.038 16:57:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:19.038 16:57:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:19.038 16:57:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:19.038 16:57:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:19.038 16:57:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:19.038 16:57:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:19.038 16:57:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:19.038 16:57:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:19.038 16:57:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:19.038 16:57:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:19.038 Cannot find device "nvmf_init_br" 00:08:19.038 16:57:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@154 -- # true 00:08:19.038 16:57:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:19.038 Cannot find device "nvmf_tgt_br" 00:08:19.038 16:57:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@155 -- # true 00:08:19.038 16:57:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:19.038 Cannot find device "nvmf_tgt_br2" 00:08:19.038 16:57:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@156 -- # true 00:08:19.038 16:57:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:19.038 Cannot find device "nvmf_init_br" 00:08:19.038 16:57:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@157 -- # true 00:08:19.038 16:57:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:19.038 Cannot find device "nvmf_tgt_br" 00:08:19.038 16:57:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@158 -- # true 00:08:19.038 16:57:09 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:19.038 Cannot find device "nvmf_tgt_br2" 00:08:19.038 16:57:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@159 -- # true 00:08:19.038 16:57:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:19.038 Cannot find device "nvmf_br" 00:08:19.038 16:57:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@160 -- # true 00:08:19.038 16:57:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:19.038 Cannot find device "nvmf_init_if" 00:08:19.331 16:57:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@161 -- # true 00:08:19.331 16:57:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:19.331 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:19.331 16:57:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:08:19.331 16:57:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:19.331 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:19.331 16:57:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:08:19.331 16:57:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:19.331 16:57:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:19.331 16:57:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:19.331 16:57:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:19.331 16:57:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:19.331 16:57:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:19.331 16:57:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:19.331 16:57:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:19.331 16:57:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:19.331 16:57:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:19.331 16:57:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:19.331 16:57:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:19.331 16:57:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:19.331 16:57:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:19.331 16:57:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:19.331 16:57:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:19.331 16:57:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:19.331 16:57:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@193 -- # ip link set nvmf_br up 
00:08:19.331 16:57:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:19.331 16:57:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:19.331 16:57:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:19.331 16:57:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:19.331 16:57:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:19.331 16:57:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:19.331 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:19.331 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.119 ms 00:08:19.331 00:08:19.331 --- 10.0.0.2 ping statistics --- 00:08:19.331 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:19.331 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:08:19.331 16:57:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:19.331 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:19.331 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:08:19.331 00:08:19.331 --- 10.0.0.3 ping statistics --- 00:08:19.331 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:19.331 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:08:19.331 16:57:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:19.331 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:19.331 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:08:19.331 00:08:19.331 --- 10.0.0.1 ping statistics --- 00:08:19.331 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:19.331 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:08:19.331 16:57:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:19.331 16:57:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@433 -- # return 0 00:08:19.331 16:57:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:19.331 16:57:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:19.331 16:57:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:19.331 16:57:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:19.331 16:57:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:19.331 16:57:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:19.331 16:57:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:19.590 16:57:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:08:19.590 16:57:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:08:19.590 16:57:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:08:19.590 16:57:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:19.590 16:57:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:19.590 16:57:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:19.590 16:57:09 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@481 -- # nvmfpid=64919 00:08:19.590 16:57:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 64919 00:08:19.590 16:57:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 64919 ']' 00:08:19.590 16:57:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:19.590 16:57:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:08:19.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:19.590 16:57:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:19.590 16:57:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:19.590 16:57:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:19.590 16:57:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:19.590 [2024-07-15 16:57:09.704381] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:08:19.590 [2024-07-15 16:57:09.704496] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:19.590 [2024-07-15 16:57:09.847328] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:19.849 [2024-07-15 16:57:09.980500] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:19.849 [2024-07-15 16:57:09.980551] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:19.849 [2024-07-15 16:57:09.980565] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:19.849 [2024-07-15 16:57:09.980575] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:19.849 [2024-07-15 16:57:09.980585] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
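Note: the nvmf_veth_init block traced just above builds a small veth/bridge topology before nvmf_tgt is started inside the namespace. A condensed sketch of the same setup, using only the interface names and 10.0.0.x addresses visible in the trace (abbreviated, run as root; this is not the full common.sh helper):

    # target interfaces live inside nvmf_tgt_ns_spdk; the initiator stays in the default netns
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator leg
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # target leg 1
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2      # target leg 2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                       # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up      # tie the three *_br peers together
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP into the initiator side
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                       # sanity-check both target addresses
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

After this, 10.0.0.2:4420 inside the namespace is reachable from the default namespace, which is why nvmf_tgt is launched above under "ip netns exec nvmf_tgt_ns_spdk".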
00:08:19.849 [2024-07-15 16:57:09.980756] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:19.849 [2024-07-15 16:57:09.981421] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:19.849 [2024-07-15 16:57:09.981565] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:08:19.849 [2024-07-15 16:57:09.981572] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:19.849 [2024-07-15 16:57:10.039058] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:20.785 16:57:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:20.785 16:57:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:08:20.785 16:57:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:20.785 16:57:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:20.785 16:57:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:20.785 16:57:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:20.785 16:57:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:20.785 16:57:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:20.785 16:57:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:20.785 [2024-07-15 16:57:10.803598] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:20.785 16:57:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:20.785 16:57:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:08:20.785 16:57:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:20.785 16:57:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:20.785 16:57:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:08:20.785 16:57:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:08:20.785 16:57:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:08:20.785 16:57:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:20.785 16:57:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:20.785 Malloc0 00:08:20.785 [2024-07-15 16:57:10.888115] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:20.785 16:57:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:20.785 16:57:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:08:20.785 16:57:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:20.785 16:57:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:20.785 16:57:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=64976 00:08:20.785 16:57:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 64976 /var/tmp/bdevperf.sock 00:08:20.785 16:57:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 64976 ']' 
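Note: the rpcs.txt piped to rpc_cmd above is not echoed in the trace; only its effects are (a TCP transport created earlier with "-t tcp -o -u 8192", a Malloc0 bdev, and a listener on 10.0.0.2 port 4420 for nqn.2016-06.io.spdk:cnode0). A plausible reconstruction of that RPC batch using standard rpc.py commands follows; the exact file contents are an assumption, only the names, sizes, and addresses taken from the surrounding log are known:

    # hypothetical rpcs.txt for this run (the real file is generated by host_management.sh and not shown)
    bdev_malloc_create 64 512 -b Malloc0                                           # MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512
    nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME       # NVMF_SERIAL from common.sh
    nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0   # host0 is what the test later removes and re-adds

With the listener up, bdevperf on the initiator side attaches to 10.0.0.2:4420 as nqn.2016-06.io.spdk:host0 via the generated JSON shown next in the trace.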
00:08:20.785 16:57:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:20.785 16:57:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:20.785 16:57:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:20.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:20.785 16:57:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:20.785 16:57:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:08:20.785 16:57:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:08:20.785 16:57:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:20.785 16:57:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:08:20.785 16:57:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:08:20.785 16:57:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:20.785 16:57:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:20.785 { 00:08:20.785 "params": { 00:08:20.785 "name": "Nvme$subsystem", 00:08:20.785 "trtype": "$TEST_TRANSPORT", 00:08:20.785 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:20.785 "adrfam": "ipv4", 00:08:20.785 "trsvcid": "$NVMF_PORT", 00:08:20.785 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:20.785 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:20.785 "hdgst": ${hdgst:-false}, 00:08:20.785 "ddgst": ${ddgst:-false} 00:08:20.785 }, 00:08:20.785 "method": "bdev_nvme_attach_controller" 00:08:20.785 } 00:08:20.785 EOF 00:08:20.785 )") 00:08:20.785 16:57:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:08:20.785 16:57:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:08:20.785 16:57:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:08:20.785 16:57:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:20.785 "params": { 00:08:20.785 "name": "Nvme0", 00:08:20.785 "trtype": "tcp", 00:08:20.785 "traddr": "10.0.0.2", 00:08:20.785 "adrfam": "ipv4", 00:08:20.785 "trsvcid": "4420", 00:08:20.785 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:20.785 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:20.785 "hdgst": false, 00:08:20.785 "ddgst": false 00:08:20.785 }, 00:08:20.785 "method": "bdev_nvme_attach_controller" 00:08:20.785 }' 00:08:20.785 [2024-07-15 16:57:10.977933] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:08:20.785 [2024-07-15 16:57:10.978009] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64976 ] 00:08:21.044 [2024-07-15 16:57:11.112940] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.044 [2024-07-15 16:57:11.232585] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.044 [2024-07-15 16:57:11.294523] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:21.303 Running I/O for 10 seconds... 00:08:21.873 16:57:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:21.874 16:57:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:08:21.874 16:57:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:08:21.874 16:57:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.874 16:57:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:21.874 16:57:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.874 16:57:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:21.874 16:57:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:08:21.874 16:57:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:08:21.874 16:57:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:08:21.874 16:57:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:08:21.874 16:57:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:08:21.874 16:57:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:08:21.874 16:57:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:21.874 16:57:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:21.874 16:57:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.874 16:57:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:21.874 16:57:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:21.874 16:57:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.874 16:57:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=899 00:08:21.874 16:57:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 899 -ge 100 ']' 00:08:21.874 16:57:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:08:21.874 16:57:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:08:21.874 16:57:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:08:21.874 16:57:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 
nqn.2016-06.io.spdk:host0 00:08:21.874 16:57:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.874 16:57:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:21.874 [2024-07-15 16:57:12.082779] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:08:21.874 [2024-07-15 16:57:12.082825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.874 [2024-07-15 16:57:12.082840] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:08:21.874 [2024-07-15 16:57:12.082850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.874 [2024-07-15 16:57:12.082861] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:08:21.874 [2024-07-15 16:57:12.082870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.874 [2024-07-15 16:57:12.082880] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:08:21.874 [2024-07-15 16:57:12.082889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.874 [2024-07-15 16:57:12.082899] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ded50 is same with the state(5) to be set 00:08:21.874 16:57:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.874 16:57:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:21.874 16:57:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.874 16:57:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:21.874 16:57:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.874 16:57:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:08:21.874 [2024-07-15 16:57:12.104149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.874 [2024-07-15 16:57:12.104188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.874 [2024-07-15 16:57:12.104213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.874 [2024-07-15 16:57:12.104224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.874 [2024-07-15 16:57:12.104236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.874 [2024-07-15 16:57:12.104245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.874 [2024-07-15 16:57:12.104257] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.874 [2024-07-15 16:57:12.104266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.874 [2024-07-15 16:57:12.104278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.874 [2024-07-15 16:57:12.104287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.874 [2024-07-15 16:57:12.104298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.874 [2024-07-15 16:57:12.104307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.874 [2024-07-15 16:57:12.104318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.874 [2024-07-15 16:57:12.104327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.874 [2024-07-15 16:57:12.104339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.874 [2024-07-15 16:57:12.104348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.874 [2024-07-15 16:57:12.104371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.874 [2024-07-15 16:57:12.104381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.874 [2024-07-15 16:57:12.104393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.874 [2024-07-15 16:57:12.104402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.874 [2024-07-15 16:57:12.104413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:1280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.874 [2024-07-15 16:57:12.104422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.874 [2024-07-15 16:57:12.104433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:1408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.874 [2024-07-15 16:57:12.104442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.874 [2024-07-15 16:57:12.104453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:1536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.874 [2024-07-15 16:57:12.104462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.874 [2024-07-15 16:57:12.104477] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:1664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.874 [2024-07-15 16:57:12.104494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.874 [2024-07-15 16:57:12.104506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:1792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.874 [2024-07-15 16:57:12.104515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.874 [2024-07-15 16:57:12.104526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.874 [2024-07-15 16:57:12.104535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.874 [2024-07-15 16:57:12.104548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:2048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.874 [2024-07-15 16:57:12.104557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.874 [2024-07-15 16:57:12.104568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:2176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.874 [2024-07-15 16:57:12.104577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.874 [2024-07-15 16:57:12.104588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:2304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.874 [2024-07-15 16:57:12.104597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.874 [2024-07-15 16:57:12.104609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:2432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.874 [2024-07-15 16:57:12.104618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.874 [2024-07-15 16:57:12.104628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:2560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.874 [2024-07-15 16:57:12.104637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.874 [2024-07-15 16:57:12.104648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:2688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.874 [2024-07-15 16:57:12.104656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.874 [2024-07-15 16:57:12.104667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:2816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.874 [2024-07-15 16:57:12.104676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.874 [2024-07-15 16:57:12.104688] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:2944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.874 [2024-07-15 16:57:12.104697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.874 [2024-07-15 16:57:12.104709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:3072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.874 [2024-07-15 16:57:12.104718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.874 [2024-07-15 16:57:12.104729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:3200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.874 [2024-07-15 16:57:12.104738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.875 [2024-07-15 16:57:12.104749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:3328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.875 [2024-07-15 16:57:12.104758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.875 [2024-07-15 16:57:12.104768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:3456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.875 [2024-07-15 16:57:12.104777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.875 [2024-07-15 16:57:12.104788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:3584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.875 [2024-07-15 16:57:12.104797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.875 [2024-07-15 16:57:12.104808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:3712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.875 [2024-07-15 16:57:12.104822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.875 [2024-07-15 16:57:12.104833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:3840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.875 [2024-07-15 16:57:12.104842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.875 [2024-07-15 16:57:12.104853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:3968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.875 [2024-07-15 16:57:12.104862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.875 [2024-07-15 16:57:12.104873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:4096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.875 [2024-07-15 16:57:12.104882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.875 [2024-07-15 16:57:12.104893] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:4224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.875 [2024-07-15 16:57:12.104902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.875 [2024-07-15 16:57:12.104913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:4352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.875 [2024-07-15 16:57:12.104922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.875 [2024-07-15 16:57:12.104932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:4480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.875 [2024-07-15 16:57:12.104941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.875 [2024-07-15 16:57:12.104952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:4608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.875 [2024-07-15 16:57:12.104961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.875 [2024-07-15 16:57:12.104972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:4736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.875 [2024-07-15 16:57:12.104982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.875 [2024-07-15 16:57:12.104993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:4864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.875 [2024-07-15 16:57:12.105001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.875 [2024-07-15 16:57:12.105012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:4992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.875 [2024-07-15 16:57:12.105023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.875 [2024-07-15 16:57:12.105034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:5120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.875 [2024-07-15 16:57:12.105044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.875 [2024-07-15 16:57:12.105055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:5248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.875 [2024-07-15 16:57:12.105064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.875 [2024-07-15 16:57:12.105075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:5376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.875 [2024-07-15 16:57:12.105083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.875 [2024-07-15 16:57:12.105094] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:5504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.875 [2024-07-15 16:57:12.105103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.875 [2024-07-15 16:57:12.105113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:5632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.875 [2024-07-15 16:57:12.105122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.875 [2024-07-15 16:57:12.105133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:5760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.875 [2024-07-15 16:57:12.105147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.875 [2024-07-15 16:57:12.105158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:5888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.875 [2024-07-15 16:57:12.105167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.875 [2024-07-15 16:57:12.105178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:6016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.875 [2024-07-15 16:57:12.105187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.875 [2024-07-15 16:57:12.105198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:6144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.875 [2024-07-15 16:57:12.105207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.875 [2024-07-15 16:57:12.105218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:6272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.875 [2024-07-15 16:57:12.105227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.875 [2024-07-15 16:57:12.105238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:6400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.875 [2024-07-15 16:57:12.105246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.875 [2024-07-15 16:57:12.105257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:6528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.875 [2024-07-15 16:57:12.105266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.875 [2024-07-15 16:57:12.105277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:6656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.875 [2024-07-15 16:57:12.105285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.875 [2024-07-15 16:57:12.105296] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:6784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.875 [2024-07-15 16:57:12.105305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.875 [2024-07-15 16:57:12.105316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:6912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.875 [2024-07-15 16:57:12.105325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.875 [2024-07-15 16:57:12.105336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:7040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.875 [2024-07-15 16:57:12.105345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.875 [2024-07-15 16:57:12.105364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:7168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.875 [2024-07-15 16:57:12.105375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.875 [2024-07-15 16:57:12.105386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:7296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.875 [2024-07-15 16:57:12.105395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.875 [2024-07-15 16:57:12.105406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:7424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.875 [2024-07-15 16:57:12.105415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.875 [2024-07-15 16:57:12.105426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:7552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.875 [2024-07-15 16:57:12.105435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.875 [2024-07-15 16:57:12.105445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:7680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.875 [2024-07-15 16:57:12.105454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.875 [2024-07-15 16:57:12.105465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:7808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.875 [2024-07-15 16:57:12.105477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.875 [2024-07-15 16:57:12.105491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:7936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.875 [2024-07-15 16:57:12.105500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.875 [2024-07-15 16:57:12.105511] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:8064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.875 [2024-07-15 16:57:12.105520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.875 [2024-07-15 16:57:12.105531] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6e6ec0 is same with the state(5) to be set 00:08:21.875 [2024-07-15 16:57:12.105602] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6e6ec0 was disconnected and freed. reset controller. 00:08:21.875 [2024-07-15 16:57:12.105684] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ded50 (9): Bad file descriptor 00:08:21.875 task offset: 0 on job bdev=Nvme0n1 fails 00:08:21.875 00:08:21.875 Latency(us) 00:08:21.875 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:21.875 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:21.875 Job: Nvme0n1 ended in about 0.69 seconds with error 00:08:21.875 Verification LBA range: start 0x0 length 0x400 00:08:21.875 Nvme0n1 : 0.69 1476.72 92.29 92.29 0.00 39791.11 1899.05 38606.66 00:08:21.875 =================================================================================================================== 00:08:21.875 Total : 1476.72 92.29 92.29 0.00 39791.11 1899.05 38606.66 00:08:21.875 [2024-07-15 16:57:12.106761] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:08:21.876 [2024-07-15 16:57:12.108854] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:21.876 [2024-07-15 16:57:12.120546] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
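What the burst of ABORTED - SQ DELETION notices above reflects: host_management.sh waits for bdevperf to accumulate reads, pulls the host NQN out of the subsystem (which aborts the in-flight WRITEs and makes bdevperf reset the controller), then adds the host back so the reset can succeed. A minimal standalone sketch of that sequence with rpc.py follows; the RPC names, NQNs, and jq filter are exactly those in the trace, while running them outside the test harness is the assumption.

# Initiator side: poll bdevperf's read counter until I/O is clearly flowing.
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
    | jq -r '.bdevs[0].num_read_ops'
# Target side: revoke the host, which aborts queued I/O with SQ DELETION...
scripts/rpc.py nvmf_subsystem_remove_host \
    nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
# ...then re-admit it so the initiator's controller reset can reconnect.
scripts/rpc.py nvmf_subsystem_add_host \
    nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0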
00:08:22.809 16:57:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 64976 00:08:22.809 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (64976) - No such process 00:08:22.809 16:57:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:08:22.809 16:57:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:08:22.809 16:57:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:08:22.809 16:57:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:08:22.809 16:57:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:08:22.809 16:57:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:08:22.809 16:57:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:22.809 16:57:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:22.809 { 00:08:22.809 "params": { 00:08:22.809 "name": "Nvme$subsystem", 00:08:22.809 "trtype": "$TEST_TRANSPORT", 00:08:22.809 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:22.809 "adrfam": "ipv4", 00:08:22.809 "trsvcid": "$NVMF_PORT", 00:08:22.809 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:22.809 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:22.809 "hdgst": ${hdgst:-false}, 00:08:22.809 "ddgst": ${ddgst:-false} 00:08:22.809 }, 00:08:22.809 "method": "bdev_nvme_attach_controller" 00:08:22.809 } 00:08:22.809 EOF 00:08:22.809 )") 00:08:22.809 16:57:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:08:23.067 16:57:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:08:23.067 16:57:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:08:23.067 16:57:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:23.067 "params": { 00:08:23.067 "name": "Nvme0", 00:08:23.067 "trtype": "tcp", 00:08:23.067 "traddr": "10.0.0.2", 00:08:23.067 "adrfam": "ipv4", 00:08:23.067 "trsvcid": "4420", 00:08:23.067 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:23.067 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:23.067 "hdgst": false, 00:08:23.067 "ddgst": false 00:08:23.067 }, 00:08:23.067 "method": "bdev_nvme_attach_controller" 00:08:23.067 }' 00:08:23.067 [2024-07-15 16:57:13.155169] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:08:23.067 [2024-07-15 16:57:13.155254] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65020 ] 00:08:23.067 [2024-07-15 16:57:13.293706] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.325 [2024-07-15 16:57:13.415985] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.325 [2024-07-15 16:57:13.481465] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:23.325 Running I/O for 1 seconds... 
00:08:24.757 00:08:24.757 Latency(us) 00:08:24.758 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:24.758 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:24.758 Verification LBA range: start 0x0 length 0x400 00:08:24.758 Nvme0n1 : 1.02 1502.74 93.92 0.00 0.00 41744.07 4438.57 39083.29 00:08:24.758 =================================================================================================================== 00:08:24.758 Total : 1502.74 93.92 0.00 0.00 41744.07 4438.57 39083.29 00:08:24.758 16:57:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:08:24.758 16:57:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:08:24.758 16:57:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:08:24.758 16:57:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:08:24.758 16:57:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:08:24.758 16:57:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:24.758 16:57:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:08:24.758 16:57:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:24.758 16:57:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:08:24.758 16:57:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:24.758 16:57:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:24.758 rmmod nvme_tcp 00:08:24.758 rmmod nvme_fabrics 00:08:24.758 rmmod nvme_keyring 00:08:24.758 16:57:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:24.758 16:57:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:08:24.758 16:57:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:08:24.758 16:57:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 64919 ']' 00:08:24.758 16:57:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 64919 00:08:24.758 16:57:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 64919 ']' 00:08:24.758 16:57:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 64919 00:08:24.758 16:57:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:08:24.758 16:57:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:24.758 16:57:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64919 00:08:24.758 16:57:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:24.758 16:57:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:24.758 killing process with pid 64919 00:08:24.758 16:57:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64919' 00:08:24.758 16:57:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 64919 00:08:24.758 16:57:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 64919 00:08:25.016 [2024-07-15 16:57:15.193544] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd 
for core 1, errno: 2 00:08:25.016 16:57:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:25.016 16:57:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:25.016 16:57:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:25.016 16:57:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:25.016 16:57:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:25.016 16:57:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:25.016 16:57:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:25.016 16:57:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:25.016 16:57:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:25.016 16:57:15 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:08:25.016 ************************************ 00:08:25.016 END TEST nvmf_host_management 00:08:25.016 ************************************ 00:08:25.016 00:08:25.016 real 0m6.120s 00:08:25.016 user 0m23.744s 00:08:25.016 sys 0m1.536s 00:08:25.016 16:57:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:25.016 16:57:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:25.016 16:57:15 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:25.016 16:57:15 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:25.016 16:57:15 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:25.016 16:57:15 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:25.016 16:57:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:25.016 ************************************ 00:08:25.016 START TEST nvmf_lvol 00:08:25.016 ************************************ 00:08:25.016 16:57:15 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:25.275 * Looking for test storage... 
00:08:25.275 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:25.275 16:57:15 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:25.275 16:57:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:08:25.275 16:57:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:25.275 16:57:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:25.275 16:57:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:25.275 16:57:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:25.275 16:57:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:25.275 16:57:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:25.275 16:57:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:25.275 16:57:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:25.275 16:57:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:25.275 16:57:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:25.275 16:57:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da 00:08:25.275 16:57:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=0b4e8503-7bac-4879-926a-209303c4b3da 00:08:25.275 16:57:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:25.275 16:57:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:25.275 16:57:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:25.275 16:57:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:25.275 16:57:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:25.275 16:57:15 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:25.275 16:57:15 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:25.275 16:57:15 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:25.275 16:57:15 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.275 16:57:15 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.275 16:57:15 nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.275 16:57:15 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:08:25.275 16:57:15 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.275 16:57:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:08:25.275 16:57:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:25.275 16:57:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:25.275 16:57:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:25.275 16:57:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:25.275 16:57:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:25.275 16:57:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:25.275 16:57:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:25.275 16:57:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:25.275 16:57:15 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:25.275 16:57:15 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:25.275 16:57:15 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:08:25.275 16:57:15 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:25.275 16:57:15 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:25.275 16:57:15 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:25.275 16:57:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:25.275 16:57:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:25.275 16:57:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:25.275 16:57:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:25.275 16:57:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:25.275 16:57:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:25.275 16:57:15 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:25.275 16:57:15 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:25.275 16:57:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:25.275 16:57:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:25.275 16:57:15 
nvmf_tcp.nvmf_lvol -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:25.275 16:57:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:25.275 16:57:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:25.275 16:57:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:25.275 16:57:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:25.275 16:57:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:25.275 16:57:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:25.275 16:57:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:25.275 16:57:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:25.275 16:57:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:25.275 16:57:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:25.275 16:57:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:25.275 16:57:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:25.275 16:57:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:25.275 16:57:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:25.275 16:57:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:25.275 16:57:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:25.275 16:57:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:25.275 Cannot find device "nvmf_tgt_br" 00:08:25.275 16:57:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@155 -- # true 00:08:25.275 16:57:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:25.275 Cannot find device "nvmf_tgt_br2" 00:08:25.275 16:57:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@156 -- # true 00:08:25.275 16:57:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:25.275 16:57:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:25.275 Cannot find device "nvmf_tgt_br" 00:08:25.275 16:57:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@158 -- # true 00:08:25.275 16:57:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:25.275 Cannot find device "nvmf_tgt_br2" 00:08:25.275 16:57:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@159 -- # true 00:08:25.275 16:57:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:25.275 16:57:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:25.276 16:57:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:25.276 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:25.276 16:57:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:08:25.276 16:57:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:25.276 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:25.276 16:57:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:08:25.276 16:57:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:25.276 16:57:15 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:25.276 16:57:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:25.276 16:57:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:25.276 16:57:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:25.533 16:57:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:25.533 16:57:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:25.533 16:57:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:25.533 16:57:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:25.533 16:57:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:25.533 16:57:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:25.533 16:57:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:25.533 16:57:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:25.533 16:57:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:25.533 16:57:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:25.533 16:57:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:25.533 16:57:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:25.533 16:57:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:25.533 16:57:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:25.533 16:57:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:25.533 16:57:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:25.533 16:57:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:25.533 16:57:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:25.533 16:57:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:25.533 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:25.533 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:08:25.533 00:08:25.533 --- 10.0.0.2 ping statistics --- 00:08:25.533 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:25.533 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:08:25.533 16:57:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:25.533 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:25.533 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 00:08:25.533 00:08:25.533 --- 10.0.0.3 ping statistics --- 00:08:25.533 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:25.534 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:08:25.534 16:57:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:25.534 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:25.534 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:08:25.534 00:08:25.534 --- 10.0.0.1 ping statistics --- 00:08:25.534 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:25.534 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:08:25.534 16:57:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:25.534 16:57:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@433 -- # return 0 00:08:25.534 16:57:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:25.534 16:57:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:25.534 16:57:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:25.534 16:57:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:25.534 16:57:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:25.534 16:57:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:25.534 16:57:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:25.534 16:57:15 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:25.534 16:57:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:25.534 16:57:15 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:25.534 16:57:15 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:25.534 16:57:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=65232 00:08:25.534 16:57:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:25.534 16:57:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 65232 00:08:25.534 16:57:15 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 65232 ']' 00:08:25.534 16:57:15 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:25.534 16:57:15 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:25.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:25.534 16:57:15 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:25.534 16:57:15 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:25.534 16:57:15 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:25.534 [2024-07-15 16:57:15.791398] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:08:25.534 [2024-07-15 16:57:15.791482] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:25.791 [2024-07-15 16:57:15.928084] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:25.791 [2024-07-15 16:57:16.045314] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:25.791 [2024-07-15 16:57:16.045392] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:25.791 [2024-07-15 16:57:16.045406] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:25.791 [2024-07-15 16:57:16.045415] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:25.791 [2024-07-15 16:57:16.045424] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:25.791 [2024-07-15 16:57:16.045506] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:25.791 [2024-07-15 16:57:16.045761] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:25.791 [2024-07-15 16:57:16.045771] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.049 [2024-07-15 16:57:16.099692] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:26.614 16:57:16 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:26.614 16:57:16 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:08:26.614 16:57:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:26.614 16:57:16 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:26.614 16:57:16 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:26.614 16:57:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:26.614 16:57:16 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:26.871 [2024-07-15 16:57:17.084209] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:26.871 16:57:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:27.129 16:57:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:27.129 16:57:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:27.386 16:57:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:27.386 16:57:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:27.644 16:57:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:28.209 16:57:18 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=eb2403b5-8b2e-499b-886c-b91e9352119e 00:08:28.209 16:57:18 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u eb2403b5-8b2e-499b-886c-b91e9352119e lvol 20 00:08:28.209 16:57:18 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=42093701-6c44-4f0e-8832-8dd116a5577a 00:08:28.209 16:57:18 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:28.465 16:57:18 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 42093701-6c44-4f0e-8832-8dd116a5577a 00:08:29.030 16:57:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:29.287 [2024-07-15 16:57:19.446185] tcp.c: 
981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:29.287 16:57:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:29.545 16:57:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=65313 00:08:29.545 16:57:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:29.545 16:57:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:30.915 16:57:20 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 42093701-6c44-4f0e-8832-8dd116a5577a MY_SNAPSHOT 00:08:30.915 16:57:21 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=1f8452ab-d778-4551-a40d-04379ad69de6 00:08:30.915 16:57:21 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 42093701-6c44-4f0e-8832-8dd116a5577a 30 00:08:31.172 16:57:21 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 1f8452ab-d778-4551-a40d-04379ad69de6 MY_CLONE 00:08:31.737 16:57:21 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=5a4ba207-3e1a-41fe-835a-86b85e57674e 00:08:31.737 16:57:21 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 5a4ba207-3e1a-41fe-835a-86b85e57674e 00:08:32.000 16:57:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 65313 00:08:40.126 Initializing NVMe Controllers 00:08:40.126 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:40.126 Controller IO queue size 128, less than required. 00:08:40.126 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:40.126 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:40.126 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:40.126 Initialization complete. Launching workers. 
00:08:40.126 ======================================================== 00:08:40.126 Latency(us) 00:08:40.126 Device Information : IOPS MiB/s Average min max 00:08:40.126 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10033.80 39.19 12765.20 1914.69 51572.62 00:08:40.126 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10156.60 39.67 12607.13 567.41 95175.57 00:08:40.126 ======================================================== 00:08:40.126 Total : 20190.40 78.87 12685.68 567.41 95175.57 00:08:40.126 00:08:40.126 16:57:30 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:40.385 16:57:30 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 42093701-6c44-4f0e-8832-8dd116a5577a 00:08:40.385 16:57:30 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u eb2403b5-8b2e-499b-886c-b91e9352119e 00:08:40.951 16:57:30 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:40.951 16:57:30 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:40.951 16:57:30 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:40.951 16:57:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:40.951 16:57:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:08:40.951 16:57:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:40.951 16:57:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:08:40.951 16:57:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:40.951 16:57:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:40.951 rmmod nvme_tcp 00:08:40.951 rmmod nvme_fabrics 00:08:40.951 rmmod nvme_keyring 00:08:40.951 16:57:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:40.951 16:57:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:08:40.951 16:57:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:08:40.951 16:57:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 65232 ']' 00:08:40.951 16:57:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 65232 00:08:40.951 16:57:31 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 65232 ']' 00:08:40.951 16:57:31 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 65232 00:08:40.951 16:57:31 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:08:40.951 16:57:31 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:40.951 16:57:31 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 65232 00:08:40.951 16:57:31 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:40.951 16:57:31 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:40.951 16:57:31 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65232' 00:08:40.951 killing process with pid 65232 00:08:40.951 16:57:31 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 65232 00:08:40.951 16:57:31 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 65232 00:08:41.209 16:57:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:41.209 16:57:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:41.209 
16:57:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:41.209 16:57:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:41.209 16:57:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:41.209 16:57:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:41.209 16:57:31 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:41.209 16:57:31 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:41.209 16:57:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:41.209 00:08:41.209 real 0m16.110s 00:08:41.209 user 1m6.789s 00:08:41.209 sys 0m4.482s 00:08:41.209 16:57:31 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:41.209 16:57:31 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:41.209 ************************************ 00:08:41.209 END TEST nvmf_lvol 00:08:41.209 ************************************ 00:08:41.209 16:57:31 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:41.209 16:57:31 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:41.209 16:57:31 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:41.209 16:57:31 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:41.209 16:57:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:41.209 ************************************ 00:08:41.209 START TEST nvmf_lvs_grow 00:08:41.209 ************************************ 00:08:41.209 16:57:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:41.468 * Looking for test storage... 
00:08:41.468 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:41.468 16:57:31 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:41.468 16:57:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:41.468 16:57:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:41.468 16:57:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:41.468 16:57:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:41.468 16:57:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:41.468 16:57:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:41.468 16:57:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:41.468 16:57:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:41.468 16:57:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:41.468 16:57:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:41.468 16:57:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:41.468 16:57:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da 00:08:41.468 16:57:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=0b4e8503-7bac-4879-926a-209303c4b3da 00:08:41.468 16:57:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:41.468 16:57:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:41.468 16:57:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:41.468 16:57:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:41.468 16:57:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:41.468 16:57:31 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:41.468 16:57:31 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:41.468 16:57:31 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:41.468 16:57:31 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.468 16:57:31 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.468 16:57:31 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.468 16:57:31 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:41.469 16:57:31 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.469 16:57:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:08:41.469 16:57:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:41.469 16:57:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:41.469 16:57:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:41.469 16:57:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:41.469 16:57:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:41.469 16:57:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:41.469 16:57:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:41.469 16:57:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:41.469 16:57:31 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:41.469 16:57:31 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:41.469 16:57:31 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:41.469 16:57:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:41.469 16:57:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:41.469 16:57:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:41.469 16:57:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:41.469 16:57:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:41.469 16:57:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:08:41.469 16:57:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:41.469 16:57:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:41.469 16:57:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:41.469 16:57:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:41.469 16:57:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:41.469 16:57:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:41.469 16:57:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:41.469 16:57:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:41.469 16:57:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:41.469 16:57:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:41.469 16:57:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:41.469 16:57:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:41.469 16:57:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:41.469 16:57:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:41.469 16:57:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:41.469 16:57:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:41.469 16:57:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:41.469 16:57:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:41.469 16:57:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:41.469 16:57:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:41.469 16:57:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:41.469 16:57:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:41.469 Cannot find device "nvmf_tgt_br" 00:08:41.469 16:57:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@155 -- # true 00:08:41.469 16:57:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:41.469 Cannot find device "nvmf_tgt_br2" 00:08:41.469 16:57:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@156 -- # true 00:08:41.469 16:57:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:41.469 16:57:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:41.469 Cannot find device "nvmf_tgt_br" 00:08:41.469 16:57:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@158 -- # true 00:08:41.469 16:57:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:41.469 Cannot find device "nvmf_tgt_br2" 00:08:41.469 16:57:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@159 -- # true 00:08:41.469 16:57:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:41.469 16:57:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:41.469 16:57:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:41.469 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:08:41.469 16:57:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:08:41.469 16:57:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:41.469 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:41.469 16:57:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:08:41.469 16:57:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:41.469 16:57:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:41.469 16:57:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:41.469 16:57:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:41.469 16:57:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:41.469 16:57:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:41.469 16:57:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:41.469 16:57:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:41.727 16:57:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:41.728 16:57:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:41.728 16:57:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:41.728 16:57:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:41.728 16:57:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:41.728 16:57:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:41.728 16:57:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:41.728 16:57:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:41.728 16:57:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:41.728 16:57:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:41.728 16:57:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:41.728 16:57:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:41.728 16:57:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:41.728 16:57:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:41.728 16:57:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:41.728 16:57:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:41.728 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:41.728 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:08:41.728 00:08:41.728 --- 10.0.0.2 ping statistics --- 00:08:41.728 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:41.728 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:08:41.728 16:57:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:41.728 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:41.728 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:08:41.728 00:08:41.728 --- 10.0.0.3 ping statistics --- 00:08:41.728 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:41.728 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:08:41.728 16:57:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:41.728 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:41.728 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:08:41.728 00:08:41.728 --- 10.0.0.1 ping statistics --- 00:08:41.728 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:41.728 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:08:41.728 16:57:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:41.728 16:57:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@433 -- # return 0 00:08:41.728 16:57:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:41.728 16:57:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:41.728 16:57:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:41.728 16:57:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:41.728 16:57:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:41.728 16:57:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:41.728 16:57:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:41.728 16:57:31 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:08:41.728 16:57:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:41.728 16:57:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:41.728 16:57:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:41.728 16:57:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=65637 00:08:41.728 16:57:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 65637 00:08:41.728 16:57:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:41.728 16:57:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 65637 ']' 00:08:41.728 16:57:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:41.728 16:57:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:41.728 16:57:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:41.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
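
Before the lvs_grow tests start, nvmf_veth_init rebuilds the virtual topology the TCP tests run on: a network namespace nvmf_tgt_ns_spdk holds the target-side veth ends (10.0.0.2 and 10.0.0.3), the initiator end stays in the root namespace as nvmf_init_if (10.0.0.1), everything is joined through the nvmf_br bridge, iptables rules admit port 4420 and bridge-forwarded traffic, and the three pings above confirm reachability in both directions. A condensed sketch for the first target interface only (the second, nvmf_tgt_if2 with 10.0.0.3, is created the same way):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk           # move the target end into the namespace

  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  ip link add nvmf_br type bridge                          # bridge the two sides together
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br

  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT      # allow forwarding across the bridge
  ping -c 1 10.0.0.2                                       # initiator -> target
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1        # target -> initiator

The target itself is then launched inside the namespace (ip netns exec nvmf_tgt_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x1), so its listeners bind to 10.0.0.2 while the initiator-side tools keep running in the root namespace.
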
00:08:41.728 16:57:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:41.728 16:57:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:41.728 [2024-07-15 16:57:31.967535] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:08:41.728 [2024-07-15 16:57:31.967634] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:41.986 [2024-07-15 16:57:32.105474] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.986 [2024-07-15 16:57:32.222509] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:41.986 [2024-07-15 16:57:32.222556] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:41.986 [2024-07-15 16:57:32.222568] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:41.986 [2024-07-15 16:57:32.222576] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:41.986 [2024-07-15 16:57:32.222584] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:41.986 [2024-07-15 16:57:32.222607] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.986 [2024-07-15 16:57:32.275712] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:42.922 16:57:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:42.922 16:57:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:08:42.922 16:57:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:42.922 16:57:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:42.922 16:57:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:42.922 16:57:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:42.922 16:57:33 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:43.179 [2024-07-15 16:57:33.261998] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:43.179 16:57:33 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:43.179 16:57:33 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:43.179 16:57:33 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:43.179 16:57:33 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:43.179 ************************************ 00:08:43.179 START TEST lvs_grow_clean 00:08:43.179 ************************************ 00:08:43.179 16:57:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:08:43.179 16:57:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:43.179 16:57:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:43.179 16:57:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:43.179 16:57:33 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:43.179 16:57:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:43.179 16:57:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:43.179 16:57:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:43.179 16:57:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:43.179 16:57:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:43.439 16:57:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:43.439 16:57:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:43.708 16:57:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=a200b8e7-6914-4c5e-b3b7-03e163c33013 00:08:43.708 16:57:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a200b8e7-6914-4c5e-b3b7-03e163c33013 00:08:43.708 16:57:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:43.983 16:57:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:43.983 16:57:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:43.983 16:57:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u a200b8e7-6914-4c5e-b3b7-03e163c33013 lvol 150 00:08:44.242 16:57:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=66efcc59-5ab1-439a-8657-97c7d32b86f0 00:08:44.242 16:57:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:44.242 16:57:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:44.502 [2024-07-15 16:57:34.573194] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:44.502 [2024-07-15 16:57:34.573286] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:44.502 true 00:08:44.502 16:57:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a200b8e7-6914-4c5e-b3b7-03e163c33013 00:08:44.502 16:57:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:44.761 16:57:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:44.761 16:57:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:45.020 16:57:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 66efcc59-5ab1-439a-8657-97c7d32b86f0 00:08:45.020 16:57:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:45.278 [2024-07-15 16:57:35.533761] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:45.278 16:57:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:45.849 16:57:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=65725 00:08:45.849 16:57:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:45.849 16:57:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:45.849 16:57:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 65725 /var/tmp/bdevperf.sock 00:08:45.849 16:57:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 65725 ']' 00:08:45.849 16:57:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:45.849 16:57:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:45.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:45.849 16:57:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:45.849 16:57:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:45.849 16:57:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:45.849 [2024-07-15 16:57:35.904032] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
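
lvs_grow_clean stages its lvstore on a file-backed AIO bdev so the backing device can be enlarged mid-test: the file starts at 200 MiB, which at the 4 MiB cluster size gives 50 clusters, 49 of them left as data clusters after the lvstore reserves space for its metadata (hence the (( data_clusters == 49 )) check); growing the file to 400 MiB and then growing the lvstore doubles that to 99. A linearized sketch of that flow is below; in the test itself the grow is issued while bdevperf is driving I/O against the exported lvol, and the file path is the repo-relative one the script uses.

  rpc=scripts/rpc.py
  aio_file=test/nvmf/target/aio_bdev                  # backing file used by the test

  truncate -s 200M "$aio_file"
  $rpc bdev_aio_create "$aio_file" aio_bdev 4096
  lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
          --md-pages-per-cluster-ratio 300 aio_bdev lvs)
  $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'    # expect 49

  lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)                            # 150 MiB volume

  # enlarge the backing file, let the aio bdev notice, then grow the lvstore into the new space
  truncate -s 400M "$aio_file"
  $rpc bdev_aio_rescan aio_bdev
  $rpc bdev_lvol_grow_lvstore -u "$lvs"
  $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'    # expect 99
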
00:08:45.849 [2024-07-15 16:57:35.904149] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65725 ] 00:08:45.849 [2024-07-15 16:57:36.049537] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.110 [2024-07-15 16:57:36.167302] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:46.110 [2024-07-15 16:57:36.223163] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:46.691 16:57:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:46.691 16:57:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:08:46.691 16:57:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:46.962 Nvme0n1 00:08:46.962 16:57:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:47.235 [ 00:08:47.235 { 00:08:47.235 "name": "Nvme0n1", 00:08:47.235 "aliases": [ 00:08:47.235 "66efcc59-5ab1-439a-8657-97c7d32b86f0" 00:08:47.235 ], 00:08:47.235 "product_name": "NVMe disk", 00:08:47.235 "block_size": 4096, 00:08:47.235 "num_blocks": 38912, 00:08:47.235 "uuid": "66efcc59-5ab1-439a-8657-97c7d32b86f0", 00:08:47.235 "assigned_rate_limits": { 00:08:47.235 "rw_ios_per_sec": 0, 00:08:47.235 "rw_mbytes_per_sec": 0, 00:08:47.235 "r_mbytes_per_sec": 0, 00:08:47.235 "w_mbytes_per_sec": 0 00:08:47.235 }, 00:08:47.235 "claimed": false, 00:08:47.235 "zoned": false, 00:08:47.235 "supported_io_types": { 00:08:47.235 "read": true, 00:08:47.235 "write": true, 00:08:47.235 "unmap": true, 00:08:47.235 "flush": true, 00:08:47.235 "reset": true, 00:08:47.235 "nvme_admin": true, 00:08:47.235 "nvme_io": true, 00:08:47.235 "nvme_io_md": false, 00:08:47.235 "write_zeroes": true, 00:08:47.235 "zcopy": false, 00:08:47.235 "get_zone_info": false, 00:08:47.235 "zone_management": false, 00:08:47.235 "zone_append": false, 00:08:47.235 "compare": true, 00:08:47.235 "compare_and_write": true, 00:08:47.235 "abort": true, 00:08:47.235 "seek_hole": false, 00:08:47.235 "seek_data": false, 00:08:47.235 "copy": true, 00:08:47.235 "nvme_iov_md": false 00:08:47.235 }, 00:08:47.235 "memory_domains": [ 00:08:47.235 { 00:08:47.235 "dma_device_id": "system", 00:08:47.235 "dma_device_type": 1 00:08:47.235 } 00:08:47.235 ], 00:08:47.235 "driver_specific": { 00:08:47.235 "nvme": [ 00:08:47.235 { 00:08:47.235 "trid": { 00:08:47.235 "trtype": "TCP", 00:08:47.235 "adrfam": "IPv4", 00:08:47.235 "traddr": "10.0.0.2", 00:08:47.235 "trsvcid": "4420", 00:08:47.235 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:47.235 }, 00:08:47.235 "ctrlr_data": { 00:08:47.235 "cntlid": 1, 00:08:47.235 "vendor_id": "0x8086", 00:08:47.235 "model_number": "SPDK bdev Controller", 00:08:47.235 "serial_number": "SPDK0", 00:08:47.235 "firmware_revision": "24.09", 00:08:47.235 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:47.235 "oacs": { 00:08:47.235 "security": 0, 00:08:47.235 "format": 0, 00:08:47.235 "firmware": 0, 00:08:47.235 "ns_manage": 0 00:08:47.235 }, 00:08:47.235 "multi_ctrlr": true, 00:08:47.235 
"ana_reporting": false 00:08:47.235 }, 00:08:47.235 "vs": { 00:08:47.235 "nvme_version": "1.3" 00:08:47.235 }, 00:08:47.235 "ns_data": { 00:08:47.235 "id": 1, 00:08:47.235 "can_share": true 00:08:47.235 } 00:08:47.235 } 00:08:47.235 ], 00:08:47.235 "mp_policy": "active_passive" 00:08:47.235 } 00:08:47.235 } 00:08:47.235 ] 00:08:47.235 16:57:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=65743 00:08:47.235 16:57:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:47.235 16:57:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:47.509 Running I/O for 10 seconds... 00:08:48.465 Latency(us) 00:08:48.465 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:48.465 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:48.465 Nvme0n1 : 1.00 7747.00 30.26 0.00 0.00 0.00 0.00 0.00 00:08:48.465 =================================================================================================================== 00:08:48.465 Total : 7747.00 30.26 0.00 0.00 0.00 0.00 0.00 00:08:48.465 00:08:49.440 16:57:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u a200b8e7-6914-4c5e-b3b7-03e163c33013 00:08:49.440 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:49.440 Nvme0n1 : 2.00 7556.50 29.52 0.00 0.00 0.00 0.00 0.00 00:08:49.440 =================================================================================================================== 00:08:49.440 Total : 7556.50 29.52 0.00 0.00 0.00 0.00 0.00 00:08:49.440 00:08:49.708 true 00:08:49.708 16:57:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a200b8e7-6914-4c5e-b3b7-03e163c33013 00:08:49.708 16:57:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:49.966 16:57:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:49.966 16:57:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:49.966 16:57:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 65743 00:08:50.530 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:50.530 Nvme0n1 : 3.00 7535.33 29.43 0.00 0.00 0.00 0.00 0.00 00:08:50.530 =================================================================================================================== 00:08:50.531 Total : 7535.33 29.43 0.00 0.00 0.00 0.00 0.00 00:08:50.531 00:08:51.464 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:51.464 Nvme0n1 : 4.00 7493.00 29.27 0.00 0.00 0.00 0.00 0.00 00:08:51.464 =================================================================================================================== 00:08:51.464 Total : 7493.00 29.27 0.00 0.00 0.00 0.00 0.00 00:08:51.464 00:08:52.398 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:52.398 Nvme0n1 : 5.00 7467.60 29.17 0.00 0.00 0.00 0.00 0.00 00:08:52.398 =================================================================================================================== 00:08:52.398 Total : 7467.60 29.17 0.00 0.00 0.00 
0.00 0.00 00:08:52.398 00:08:53.332 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:53.332 Nvme0n1 : 6.00 7429.50 29.02 0.00 0.00 0.00 0.00 0.00 00:08:53.332 =================================================================================================================== 00:08:53.332 Total : 7429.50 29.02 0.00 0.00 0.00 0.00 0.00 00:08:53.332 00:08:54.266 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:54.266 Nvme0n1 : 7.00 7402.29 28.92 0.00 0.00 0.00 0.00 0.00 00:08:54.266 =================================================================================================================== 00:08:54.266 Total : 7402.29 28.92 0.00 0.00 0.00 0.00 0.00 00:08:54.266 00:08:55.642 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:55.642 Nvme0n1 : 8.00 7350.12 28.71 0.00 0.00 0.00 0.00 0.00 00:08:55.642 =================================================================================================================== 00:08:55.642 Total : 7350.12 28.71 0.00 0.00 0.00 0.00 0.00 00:08:55.642 00:08:56.577 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:56.577 Nvme0n1 : 9.00 7337.78 28.66 0.00 0.00 0.00 0.00 0.00 00:08:56.577 =================================================================================================================== 00:08:56.577 Total : 7337.78 28.66 0.00 0.00 0.00 0.00 0.00 00:08:56.577 00:08:57.513 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:57.513 Nvme0n1 : 10.00 7315.20 28.57 0.00 0.00 0.00 0.00 0.00 00:08:57.513 =================================================================================================================== 00:08:57.513 Total : 7315.20 28.57 0.00 0.00 0.00 0.00 0.00 00:08:57.513 00:08:57.513 00:08:57.513 Latency(us) 00:08:57.513 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:57.513 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:57.513 Nvme0n1 : 10.02 7316.60 28.58 0.00 0.00 17488.46 13702.98 39083.29 00:08:57.513 =================================================================================================================== 00:08:57.513 Total : 7316.60 28.58 0.00 0.00 17488.46 13702.98 39083.29 00:08:57.513 0 00:08:57.513 16:57:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 65725 00:08:57.513 16:57:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 65725 ']' 00:08:57.513 16:57:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 65725 00:08:57.513 16:57:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:08:57.513 16:57:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:57.513 16:57:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 65725 00:08:57.514 16:57:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:57.514 16:57:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:57.514 killing process with pid 65725 00:08:57.514 16:57:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65725' 00:08:57.514 16:57:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 65725 
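
The I/O side of the test is a bdevperf instance with its own RPC socket: it is started idle with -z, the exported namespace is attached to it as an NVMe-oF TCP initiator bdev, and perform_tests then runs the configured 10-second 4 KiB random-write workload whose per-second IOPS and final latency summary appear above; bdev_lvol_grow_lvstore is issued against the live lvstore partway through the run. A sketch of those three steps, with paths relative to the SPDK repo root:

  # start bdevperf waiting for RPC configuration (-z), with a dedicated RPC socket
  build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 \
      -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &

  # attach the exported namespace; the controller "Nvme0" exposes the lvol as bdev "Nvme0n1"
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0

  # run the configured workload and collect the results
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
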
00:08:57.514 Received shutdown signal, test time was about 10.000000 seconds 00:08:57.514 00:08:57.514 Latency(us) 00:08:57.514 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:57.514 =================================================================================================================== 00:08:57.514 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:57.514 16:57:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 65725 00:08:57.772 16:57:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:58.031 16:57:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:58.290 16:57:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a200b8e7-6914-4c5e-b3b7-03e163c33013 00:08:58.290 16:57:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:58.548 16:57:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:58.548 16:57:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:58.548 16:57:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:58.806 [2024-07-15 16:57:48.879010] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:58.806 16:57:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a200b8e7-6914-4c5e-b3b7-03e163c33013 00:08:58.806 16:57:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:08:58.806 16:57:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a200b8e7-6914-4c5e-b3b7-03e163c33013 00:08:58.806 16:57:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:58.806 16:57:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:58.806 16:57:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:58.806 16:57:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:58.806 16:57:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:58.806 16:57:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:58.806 16:57:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:58.806 16:57:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:58.806 16:57:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u a200b8e7-6914-4c5e-b3b7-03e163c33013 00:08:59.065 request: 00:08:59.065 { 00:08:59.065 "uuid": "a200b8e7-6914-4c5e-b3b7-03e163c33013", 00:08:59.065 "method": "bdev_lvol_get_lvstores", 00:08:59.065 "req_id": 1 00:08:59.065 } 00:08:59.065 Got JSON-RPC error response 00:08:59.065 response: 00:08:59.065 { 00:08:59.065 "code": -19, 00:08:59.065 "message": "No such device" 00:08:59.065 } 00:08:59.065 16:57:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:08:59.065 16:57:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:59.065 16:57:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:59.065 16:57:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:59.065 16:57:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:59.324 aio_bdev 00:08:59.324 16:57:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 66efcc59-5ab1-439a-8657-97c7d32b86f0 00:08:59.324 16:57:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=66efcc59-5ab1-439a-8657-97c7d32b86f0 00:08:59.324 16:57:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:08:59.324 16:57:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:08:59.324 16:57:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:08:59.324 16:57:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:08:59.324 16:57:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:59.586 16:57:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 66efcc59-5ab1-439a-8657-97c7d32b86f0 -t 2000 00:08:59.844 [ 00:08:59.844 { 00:08:59.844 "name": "66efcc59-5ab1-439a-8657-97c7d32b86f0", 00:08:59.844 "aliases": [ 00:08:59.844 "lvs/lvol" 00:08:59.844 ], 00:08:59.844 "product_name": "Logical Volume", 00:08:59.844 "block_size": 4096, 00:08:59.844 "num_blocks": 38912, 00:08:59.844 "uuid": "66efcc59-5ab1-439a-8657-97c7d32b86f0", 00:08:59.844 "assigned_rate_limits": { 00:08:59.844 "rw_ios_per_sec": 0, 00:08:59.844 "rw_mbytes_per_sec": 0, 00:08:59.844 "r_mbytes_per_sec": 0, 00:08:59.844 "w_mbytes_per_sec": 0 00:08:59.844 }, 00:08:59.844 "claimed": false, 00:08:59.844 "zoned": false, 00:08:59.844 "supported_io_types": { 00:08:59.844 "read": true, 00:08:59.844 "write": true, 00:08:59.844 "unmap": true, 00:08:59.844 "flush": false, 00:08:59.844 "reset": true, 00:08:59.844 "nvme_admin": false, 00:08:59.844 "nvme_io": false, 00:08:59.844 "nvme_io_md": false, 00:08:59.844 "write_zeroes": true, 00:08:59.844 "zcopy": false, 00:08:59.844 "get_zone_info": false, 00:08:59.844 "zone_management": false, 00:08:59.844 "zone_append": false, 00:08:59.844 "compare": false, 00:08:59.844 "compare_and_write": false, 00:08:59.844 "abort": false, 00:08:59.844 "seek_hole": true, 00:08:59.844 "seek_data": true, 00:08:59.844 "copy": false, 00:08:59.844 "nvme_iov_md": false 00:08:59.844 }, 00:08:59.844 "driver_specific": { 00:08:59.844 "lvol": { 
00:08:59.844 "lvol_store_uuid": "a200b8e7-6914-4c5e-b3b7-03e163c33013", 00:08:59.844 "base_bdev": "aio_bdev", 00:08:59.844 "thin_provision": false, 00:08:59.844 "num_allocated_clusters": 38, 00:08:59.844 "snapshot": false, 00:08:59.844 "clone": false, 00:08:59.844 "esnap_clone": false 00:08:59.844 } 00:08:59.844 } 00:08:59.844 } 00:08:59.844 ] 00:08:59.844 16:57:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:08:59.844 16:57:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a200b8e7-6914-4c5e-b3b7-03e163c33013 00:08:59.844 16:57:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:00.101 16:57:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:00.101 16:57:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a200b8e7-6914-4c5e-b3b7-03e163c33013 00:09:00.101 16:57:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:00.419 16:57:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:00.419 16:57:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 66efcc59-5ab1-439a-8657-97c7d32b86f0 00:09:00.692 16:57:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a200b8e7-6914-4c5e-b3b7-03e163c33013 00:09:00.950 16:57:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:01.207 16:57:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:01.463 ************************************ 00:09:01.463 END TEST lvs_grow_clean 00:09:01.463 ************************************ 00:09:01.463 00:09:01.463 real 0m18.382s 00:09:01.463 user 0m17.284s 00:09:01.463 sys 0m2.609s 00:09:01.463 16:57:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:01.463 16:57:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:01.463 16:57:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:09:01.464 16:57:51 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:09:01.464 16:57:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:01.464 16:57:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:01.464 16:57:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:01.464 ************************************ 00:09:01.464 START TEST lvs_grow_dirty 00:09:01.464 ************************************ 00:09:01.464 16:57:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:09:01.464 16:57:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:01.464 16:57:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:01.464 16:57:51 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:01.464 16:57:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:01.464 16:57:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:01.464 16:57:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:01.464 16:57:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:01.464 16:57:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:01.464 16:57:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:02.029 16:57:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:02.029 16:57:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:02.029 16:57:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=3f38a169-aa67-4634-9de1-eba4a4aa9817 00:09:02.029 16:57:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3f38a169-aa67-4634-9de1-eba4a4aa9817 00:09:02.029 16:57:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:02.286 16:57:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:02.286 16:57:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:02.286 16:57:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 3f38a169-aa67-4634-9de1-eba4a4aa9817 lvol 150 00:09:02.544 16:57:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=6a2277b8-3df8-4afe-bdbf-31608a7bf637 00:09:02.544 16:57:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:02.544 16:57:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:02.814 [2024-07-15 16:57:53.039275] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:02.814 [2024-07-15 16:57:53.039376] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:02.814 true 00:09:02.814 16:57:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:02.814 16:57:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3f38a169-aa67-4634-9de1-eba4a4aa9817 00:09:03.379 16:57:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 
)) 00:09:03.379 16:57:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:03.379 16:57:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 6a2277b8-3df8-4afe-bdbf-31608a7bf637 00:09:03.639 16:57:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:04.205 [2024-07-15 16:57:54.199870] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:04.205 16:57:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:04.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:04.205 16:57:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=65995 00:09:04.205 16:57:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:04.205 16:57:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:04.205 16:57:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 65995 /var/tmp/bdevperf.sock 00:09:04.205 16:57:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 65995 ']' 00:09:04.205 16:57:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:04.205 16:57:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:04.205 16:57:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:04.205 16:57:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:04.205 16:57:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:04.463 [2024-07-15 16:57:54.518819] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
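
lvs_grow_dirty runs the same lvs_grow function with the dirty argument, repeating the file-backed setup above (a fresh lvstore, the 49-data-cluster check, a 150 MiB lvol). In the clean run, the closing verification read the lvol back through bdev_get_bdevs and checked the lvstore's cluster accounting: 99 data clusters in total, 38 of them allocated to the 150 MiB lvol (ceil(150 MiB / 4 MiB) = 38, matching num_allocated_clusters in the JSON above), leaving the 61 free clusters the script asserts on. A sketch of that check, with $lvol and $lvs standing in for the UUIDs captured in the log:

  # confirm the lvol is present and inspect its allocation state
  scripts/rpc.py bdev_get_bdevs -b "$lvol" -t 2000 \
      | jq '.[0].driver_specific.lvol | {thin_provision, num_allocated_clusters}'

  # cluster accounting on the grown lvstore: 99 total data clusters - 38 allocated = 61 free
  scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters'        # expect 61
  scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'  # expect 99
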
00:09:04.463 [2024-07-15 16:57:54.518912] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65995 ] 00:09:04.463 [2024-07-15 16:57:54.662374] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:04.720 [2024-07-15 16:57:54.780855] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:04.720 [2024-07-15 16:57:54.834872] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:05.285 16:57:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:05.285 16:57:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:09:05.286 16:57:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:05.542 Nvme0n1 00:09:05.542 16:57:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:05.801 [ 00:09:05.801 { 00:09:05.801 "name": "Nvme0n1", 00:09:05.801 "aliases": [ 00:09:05.801 "6a2277b8-3df8-4afe-bdbf-31608a7bf637" 00:09:05.801 ], 00:09:05.801 "product_name": "NVMe disk", 00:09:05.801 "block_size": 4096, 00:09:05.801 "num_blocks": 38912, 00:09:05.801 "uuid": "6a2277b8-3df8-4afe-bdbf-31608a7bf637", 00:09:05.801 "assigned_rate_limits": { 00:09:05.801 "rw_ios_per_sec": 0, 00:09:05.801 "rw_mbytes_per_sec": 0, 00:09:05.801 "r_mbytes_per_sec": 0, 00:09:05.801 "w_mbytes_per_sec": 0 00:09:05.801 }, 00:09:05.801 "claimed": false, 00:09:05.801 "zoned": false, 00:09:05.801 "supported_io_types": { 00:09:05.801 "read": true, 00:09:05.801 "write": true, 00:09:05.801 "unmap": true, 00:09:05.801 "flush": true, 00:09:05.801 "reset": true, 00:09:05.801 "nvme_admin": true, 00:09:05.801 "nvme_io": true, 00:09:05.801 "nvme_io_md": false, 00:09:05.801 "write_zeroes": true, 00:09:05.801 "zcopy": false, 00:09:05.801 "get_zone_info": false, 00:09:05.801 "zone_management": false, 00:09:05.801 "zone_append": false, 00:09:05.801 "compare": true, 00:09:05.801 "compare_and_write": true, 00:09:05.801 "abort": true, 00:09:05.801 "seek_hole": false, 00:09:05.801 "seek_data": false, 00:09:05.801 "copy": true, 00:09:05.801 "nvme_iov_md": false 00:09:05.801 }, 00:09:05.801 "memory_domains": [ 00:09:05.801 { 00:09:05.801 "dma_device_id": "system", 00:09:05.801 "dma_device_type": 1 00:09:05.801 } 00:09:05.801 ], 00:09:05.801 "driver_specific": { 00:09:05.801 "nvme": [ 00:09:05.801 { 00:09:05.801 "trid": { 00:09:05.801 "trtype": "TCP", 00:09:05.801 "adrfam": "IPv4", 00:09:05.801 "traddr": "10.0.0.2", 00:09:05.801 "trsvcid": "4420", 00:09:05.801 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:05.801 }, 00:09:05.801 "ctrlr_data": { 00:09:05.801 "cntlid": 1, 00:09:05.801 "vendor_id": "0x8086", 00:09:05.801 "model_number": "SPDK bdev Controller", 00:09:05.801 "serial_number": "SPDK0", 00:09:05.801 "firmware_revision": "24.09", 00:09:05.801 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:05.801 "oacs": { 00:09:05.801 "security": 0, 00:09:05.801 "format": 0, 00:09:05.801 "firmware": 0, 00:09:05.801 "ns_manage": 0 00:09:05.801 }, 00:09:05.801 "multi_ctrlr": true, 00:09:05.801 
"ana_reporting": false 00:09:05.801 }, 00:09:05.801 "vs": { 00:09:05.801 "nvme_version": "1.3" 00:09:05.801 }, 00:09:05.801 "ns_data": { 00:09:05.801 "id": 1, 00:09:05.801 "can_share": true 00:09:05.801 } 00:09:05.801 } 00:09:05.801 ], 00:09:05.801 "mp_policy": "active_passive" 00:09:05.801 } 00:09:05.801 } 00:09:05.801 ] 00:09:05.801 16:57:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:05.801 16:57:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=66024 00:09:05.801 16:57:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:06.059 Running I/O for 10 seconds... 00:09:07.034 Latency(us) 00:09:07.034 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:07.034 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:07.034 Nvme0n1 : 1.00 7112.00 27.78 0.00 0.00 0.00 0.00 0.00 00:09:07.034 =================================================================================================================== 00:09:07.034 Total : 7112.00 27.78 0.00 0.00 0.00 0.00 0.00 00:09:07.034 00:09:07.968 16:57:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 3f38a169-aa67-4634-9de1-eba4a4aa9817 00:09:07.968 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:07.968 Nvme0n1 : 2.00 6921.50 27.04 0.00 0.00 0.00 0.00 0.00 00:09:07.968 =================================================================================================================== 00:09:07.968 Total : 6921.50 27.04 0.00 0.00 0.00 0.00 0.00 00:09:07.968 00:09:07.968 true 00:09:08.226 16:57:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3f38a169-aa67-4634-9de1-eba4a4aa9817 00:09:08.226 16:57:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:08.483 16:57:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:08.483 16:57:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:08.483 16:57:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 66024 00:09:09.049 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:09.049 Nvme0n1 : 3.00 7027.33 27.45 0.00 0.00 0.00 0.00 0.00 00:09:09.049 =================================================================================================================== 00:09:09.049 Total : 7027.33 27.45 0.00 0.00 0.00 0.00 0.00 00:09:09.049 00:09:09.984 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:09.984 Nvme0n1 : 4.00 7048.50 27.53 0.00 0.00 0.00 0.00 0.00 00:09:09.984 =================================================================================================================== 00:09:09.984 Total : 7048.50 27.53 0.00 0.00 0.00 0.00 0.00 00:09:09.984 00:09:10.920 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:10.920 Nvme0n1 : 5.00 7061.20 27.58 0.00 0.00 0.00 0.00 0.00 00:09:10.920 =================================================================================================================== 00:09:10.920 Total : 7061.20 27.58 0.00 0.00 0.00 
0.00 0.00 00:09:10.920 00:09:11.856 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:11.856 Nvme0n1 : 6.00 7027.33 27.45 0.00 0.00 0.00 0.00 0.00 00:09:11.856 =================================================================================================================== 00:09:11.856 Total : 7027.33 27.45 0.00 0.00 0.00 0.00 0.00 00:09:11.856 00:09:13.231 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:13.231 Nvme0n1 : 7.00 6679.00 26.09 0.00 0.00 0.00 0.00 0.00 00:09:13.231 =================================================================================================================== 00:09:13.231 Total : 6679.00 26.09 0.00 0.00 0.00 0.00 0.00 00:09:13.231 00:09:14.165 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:14.165 Nvme0n1 : 8.00 6685.50 26.12 0.00 0.00 0.00 0.00 0.00 00:09:14.165 =================================================================================================================== 00:09:14.165 Total : 6685.50 26.12 0.00 0.00 0.00 0.00 0.00 00:09:14.165 00:09:15.102 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:15.102 Nvme0n1 : 9.00 6704.67 26.19 0.00 0.00 0.00 0.00 0.00 00:09:15.102 =================================================================================================================== 00:09:15.102 Total : 6704.67 26.19 0.00 0.00 0.00 0.00 0.00 00:09:15.102 00:09:16.039 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:16.039 Nvme0n1 : 10.00 6707.30 26.20 0.00 0.00 0.00 0.00 0.00 00:09:16.039 =================================================================================================================== 00:09:16.039 Total : 6707.30 26.20 0.00 0.00 0.00 0.00 0.00 00:09:16.039 00:09:16.039 00:09:16.039 Latency(us) 00:09:16.039 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:16.039 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:16.039 Nvme0n1 : 10.01 6712.35 26.22 0.00 0.00 19064.41 5302.46 343170.33 00:09:16.039 =================================================================================================================== 00:09:16.039 Total : 6712.35 26.22 0.00 0.00 19064.41 5302.46 343170.33 00:09:16.039 0 00:09:16.039 16:58:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 65995 00:09:16.039 16:58:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 65995 ']' 00:09:16.039 16:58:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 65995 00:09:16.039 16:58:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:09:16.039 16:58:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:16.039 16:58:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 65995 00:09:16.039 killing process with pid 65995 00:09:16.039 Received shutdown signal, test time was about 10.000000 seconds 00:09:16.039 00:09:16.039 Latency(us) 00:09:16.039 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:16.039 =================================================================================================================== 00:09:16.039 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:16.039 16:58:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # 
process_name=reactor_1 00:09:16.039 16:58:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:16.039 16:58:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65995' 00:09:16.039 16:58:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 65995 00:09:16.039 16:58:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 65995 00:09:16.297 16:58:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:16.556 16:58:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:16.816 16:58:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3f38a169-aa67-4634-9de1-eba4a4aa9817 00:09:16.816 16:58:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:17.075 16:58:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:17.075 16:58:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:09:17.075 16:58:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 65637 00:09:17.075 16:58:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 65637 00:09:17.075 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 65637 Killed "${NVMF_APP[@]}" "$@" 00:09:17.075 16:58:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:09:17.075 16:58:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:09:17.075 16:58:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:17.075 16:58:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:17.075 16:58:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:17.075 16:58:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=66157 00:09:17.075 16:58:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 66157 00:09:17.075 16:58:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 66157 ']' 00:09:17.075 16:58:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:17.075 16:58:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:17.075 16:58:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:17.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:17.075 16:58:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:17.075 16:58:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:17.075 16:58:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:17.075 [2024-07-15 16:58:07.290018] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:09:17.075 [2024-07-15 16:58:07.290115] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:17.334 [2024-07-15 16:58:07.433158] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:17.334 [2024-07-15 16:58:07.551976] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:17.334 [2024-07-15 16:58:07.552038] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:17.334 [2024-07-15 16:58:07.552050] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:17.334 [2024-07-15 16:58:07.552058] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:17.334 [2024-07-15 16:58:07.552065] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:17.334 [2024-07-15 16:58:07.552097] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:17.334 [2024-07-15 16:58:07.607158] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:18.270 16:58:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:18.270 16:58:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:09:18.270 16:58:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:18.270 16:58:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:18.270 16:58:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:18.270 16:58:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:18.270 16:58:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:18.529 [2024-07-15 16:58:08.601312] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:18.529 [2024-07-15 16:58:08.602084] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:18.529 [2024-07-15 16:58:08.602411] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:18.529 16:58:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:09:18.529 16:58:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 6a2277b8-3df8-4afe-bdbf-31608a7bf637 00:09:18.529 16:58:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=6a2277b8-3df8-4afe-bdbf-31608a7bf637 00:09:18.529 16:58:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:18.529 16:58:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 
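The "dirty" part of the scenario is what the trace above just exercised: bdevperf and the listener are torn down, the running target (pid 65637) is killed with SIGKILL so the lvstore is never cleanly unloaded, a fresh nvmf_tgt (pid 66157) is started, and re-creating the AIO bdev makes the blobstore run recovery (the "Performing recovery on blobstore" and "Recover: blob 0x0/0x1" notices). A minimal sketch of that reload check, reusing the shorthand from the earlier recap; it is illustrative only, with the UUID and expected counts taken from this run.

kill -9 "$nvmfpid"                                     # hard kill: no clean lvstore unload
# ...restart the target (nvmfappstart -m 0x1), then re-attach the backing file:
"$rpc" bdev_aio_create "$aio_file" aio_bdev 4096       # loading the bdev triggers blobstore recovery
"$rpc" bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters'        # expected 61
"$rpc" bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'  # expected 99

In other words, the grow performed while I/O was in flight is expected to survive the crash once recovery has replayed the blobstore metadata.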
00:09:18.529 16:58:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:18.529 16:58:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:18.529 16:58:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:18.788 16:58:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6a2277b8-3df8-4afe-bdbf-31608a7bf637 -t 2000 00:09:19.048 [ 00:09:19.048 { 00:09:19.048 "name": "6a2277b8-3df8-4afe-bdbf-31608a7bf637", 00:09:19.048 "aliases": [ 00:09:19.048 "lvs/lvol" 00:09:19.048 ], 00:09:19.048 "product_name": "Logical Volume", 00:09:19.048 "block_size": 4096, 00:09:19.048 "num_blocks": 38912, 00:09:19.048 "uuid": "6a2277b8-3df8-4afe-bdbf-31608a7bf637", 00:09:19.048 "assigned_rate_limits": { 00:09:19.048 "rw_ios_per_sec": 0, 00:09:19.048 "rw_mbytes_per_sec": 0, 00:09:19.048 "r_mbytes_per_sec": 0, 00:09:19.048 "w_mbytes_per_sec": 0 00:09:19.048 }, 00:09:19.048 "claimed": false, 00:09:19.048 "zoned": false, 00:09:19.048 "supported_io_types": { 00:09:19.048 "read": true, 00:09:19.048 "write": true, 00:09:19.049 "unmap": true, 00:09:19.049 "flush": false, 00:09:19.049 "reset": true, 00:09:19.049 "nvme_admin": false, 00:09:19.049 "nvme_io": false, 00:09:19.049 "nvme_io_md": false, 00:09:19.049 "write_zeroes": true, 00:09:19.049 "zcopy": false, 00:09:19.049 "get_zone_info": false, 00:09:19.049 "zone_management": false, 00:09:19.049 "zone_append": false, 00:09:19.049 "compare": false, 00:09:19.049 "compare_and_write": false, 00:09:19.049 "abort": false, 00:09:19.049 "seek_hole": true, 00:09:19.049 "seek_data": true, 00:09:19.049 "copy": false, 00:09:19.049 "nvme_iov_md": false 00:09:19.049 }, 00:09:19.049 "driver_specific": { 00:09:19.049 "lvol": { 00:09:19.049 "lvol_store_uuid": "3f38a169-aa67-4634-9de1-eba4a4aa9817", 00:09:19.049 "base_bdev": "aio_bdev", 00:09:19.049 "thin_provision": false, 00:09:19.049 "num_allocated_clusters": 38, 00:09:19.049 "snapshot": false, 00:09:19.049 "clone": false, 00:09:19.049 "esnap_clone": false 00:09:19.049 } 00:09:19.049 } 00:09:19.049 } 00:09:19.049 ] 00:09:19.049 16:58:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:09:19.049 16:58:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3f38a169-aa67-4634-9de1-eba4a4aa9817 00:09:19.049 16:58:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:09:19.307 16:58:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:09:19.308 16:58:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:09:19.308 16:58:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3f38a169-aa67-4634-9de1-eba4a4aa9817 00:09:19.566 16:58:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:09:19.566 16:58:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:19.824 [2024-07-15 16:58:09.910615] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev 
aio_bdev being removed: closing lvstore lvs 00:09:19.824 16:58:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3f38a169-aa67-4634-9de1-eba4a4aa9817 00:09:19.824 16:58:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:09:19.824 16:58:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3f38a169-aa67-4634-9de1-eba4a4aa9817 00:09:19.824 16:58:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:19.825 16:58:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:19.825 16:58:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:19.825 16:58:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:19.825 16:58:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:19.825 16:58:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:19.825 16:58:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:19.825 16:58:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:19.825 16:58:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3f38a169-aa67-4634-9de1-eba4a4aa9817 00:09:20.084 request: 00:09:20.084 { 00:09:20.084 "uuid": "3f38a169-aa67-4634-9de1-eba4a4aa9817", 00:09:20.084 "method": "bdev_lvol_get_lvstores", 00:09:20.084 "req_id": 1 00:09:20.084 } 00:09:20.084 Got JSON-RPC error response 00:09:20.084 response: 00:09:20.084 { 00:09:20.084 "code": -19, 00:09:20.084 "message": "No such device" 00:09:20.084 } 00:09:20.084 16:58:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:09:20.084 16:58:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:20.084 16:58:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:20.084 16:58:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:20.084 16:58:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:20.342 aio_bdev 00:09:20.342 16:58:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 6a2277b8-3df8-4afe-bdbf-31608a7bf637 00:09:20.342 16:58:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=6a2277b8-3df8-4afe-bdbf-31608a7bf637 00:09:20.342 16:58:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:20.342 16:58:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:09:20.342 16:58:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:20.342 16:58:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:20.342 16:58:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:20.600 16:58:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6a2277b8-3df8-4afe-bdbf-31608a7bf637 -t 2000 00:09:20.859 [ 00:09:20.859 { 00:09:20.859 "name": "6a2277b8-3df8-4afe-bdbf-31608a7bf637", 00:09:20.859 "aliases": [ 00:09:20.859 "lvs/lvol" 00:09:20.859 ], 00:09:20.859 "product_name": "Logical Volume", 00:09:20.859 "block_size": 4096, 00:09:20.859 "num_blocks": 38912, 00:09:20.859 "uuid": "6a2277b8-3df8-4afe-bdbf-31608a7bf637", 00:09:20.859 "assigned_rate_limits": { 00:09:20.859 "rw_ios_per_sec": 0, 00:09:20.859 "rw_mbytes_per_sec": 0, 00:09:20.859 "r_mbytes_per_sec": 0, 00:09:20.859 "w_mbytes_per_sec": 0 00:09:20.859 }, 00:09:20.859 "claimed": false, 00:09:20.859 "zoned": false, 00:09:20.859 "supported_io_types": { 00:09:20.859 "read": true, 00:09:20.859 "write": true, 00:09:20.859 "unmap": true, 00:09:20.859 "flush": false, 00:09:20.859 "reset": true, 00:09:20.859 "nvme_admin": false, 00:09:20.859 "nvme_io": false, 00:09:20.859 "nvme_io_md": false, 00:09:20.859 "write_zeroes": true, 00:09:20.859 "zcopy": false, 00:09:20.859 "get_zone_info": false, 00:09:20.859 "zone_management": false, 00:09:20.859 "zone_append": false, 00:09:20.859 "compare": false, 00:09:20.859 "compare_and_write": false, 00:09:20.859 "abort": false, 00:09:20.859 "seek_hole": true, 00:09:20.859 "seek_data": true, 00:09:20.859 "copy": false, 00:09:20.859 "nvme_iov_md": false 00:09:20.859 }, 00:09:20.859 "driver_specific": { 00:09:20.859 "lvol": { 00:09:20.859 "lvol_store_uuid": "3f38a169-aa67-4634-9de1-eba4a4aa9817", 00:09:20.859 "base_bdev": "aio_bdev", 00:09:20.859 "thin_provision": false, 00:09:20.859 "num_allocated_clusters": 38, 00:09:20.859 "snapshot": false, 00:09:20.859 "clone": false, 00:09:20.859 "esnap_clone": false 00:09:20.859 } 00:09:20.859 } 00:09:20.859 } 00:09:20.859 ] 00:09:20.859 16:58:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:09:20.859 16:58:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3f38a169-aa67-4634-9de1-eba4a4aa9817 00:09:20.859 16:58:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:21.118 16:58:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:21.118 16:58:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3f38a169-aa67-4634-9de1-eba4a4aa9817 00:09:21.118 16:58:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:21.376 16:58:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:21.376 16:58:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 6a2277b8-3df8-4afe-bdbf-31608a7bf637 00:09:21.634 16:58:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u 3f38a169-aa67-4634-9de1-eba4a4aa9817 00:09:21.927 16:58:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:21.927 16:58:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:22.496 ************************************ 00:09:22.496 END TEST lvs_grow_dirty 00:09:22.496 ************************************ 00:09:22.496 00:09:22.496 real 0m20.828s 00:09:22.496 user 0m44.027s 00:09:22.496 sys 0m8.004s 00:09:22.496 16:58:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:22.496 16:58:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:22.496 16:58:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:09:22.496 16:58:12 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:22.496 16:58:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:09:22.496 16:58:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:09:22.496 16:58:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:09:22.496 16:58:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:22.496 16:58:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:09:22.496 16:58:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:09:22.496 16:58:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:09:22.496 16:58:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:22.496 nvmf_trace.0 00:09:22.496 16:58:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:09:22.496 16:58:12 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:22.496 16:58:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:22.496 16:58:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:09:22.755 16:58:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:22.755 16:58:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:09:22.755 16:58:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:22.755 16:58:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:22.755 rmmod nvme_tcp 00:09:22.755 rmmod nvme_fabrics 00:09:22.755 rmmod nvme_keyring 00:09:22.755 16:58:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:22.755 16:58:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:09:22.755 16:58:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:09:22.755 16:58:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 66157 ']' 00:09:22.755 16:58:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 66157 00:09:22.755 16:58:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 66157 ']' 00:09:22.755 16:58:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 66157 00:09:22.755 16:58:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:09:22.755 16:58:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- 
# '[' Linux = Linux ']' 00:09:22.755 16:58:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66157 00:09:22.755 killing process with pid 66157 00:09:22.755 16:58:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:22.755 16:58:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:22.755 16:58:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66157' 00:09:22.755 16:58:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 66157 00:09:22.755 16:58:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 66157 00:09:23.014 16:58:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:23.014 16:58:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:23.014 16:58:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:23.014 16:58:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:23.014 16:58:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:23.014 16:58:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:23.014 16:58:13 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:23.014 16:58:13 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:23.014 16:58:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:23.014 ************************************ 00:09:23.014 END TEST nvmf_lvs_grow 00:09:23.014 ************************************ 00:09:23.014 00:09:23.014 real 0m41.723s 00:09:23.014 user 1m7.792s 00:09:23.014 sys 0m11.344s 00:09:23.014 16:58:13 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:23.014 16:58:13 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:23.014 16:58:13 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:23.014 16:58:13 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:23.014 16:58:13 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:23.014 16:58:13 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:23.014 16:58:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:23.014 ************************************ 00:09:23.014 START TEST nvmf_bdev_io_wait 00:09:23.014 ************************************ 00:09:23.014 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:23.014 * Looking for test storage... 
00:09:23.014 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:23.014 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:23.014 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:09:23.273 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:23.273 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:23.273 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:23.273 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:23.273 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:23.273 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:23.273 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:23.273 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:23.273 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:23.273 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:23.273 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da 00:09:23.273 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=0b4e8503-7bac-4879-926a-209303c4b3da 00:09:23.273 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:23.273 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:23.273 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:23.273 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:23.273 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:23.273 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:23.273 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:23.273 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:23.273 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:23.273 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:23.273 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:23.273 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:23.273 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:23.273 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:09:23.273 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:23.273 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:23.273 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:23.273 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:23.273 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:23.273 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:23.273 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:23.273 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:23.273 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:23.273 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:23.273 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:23.273 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:23.273 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:23.273 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:23.273 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:23.273 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:23.273 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:23.273 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:23.273 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:23.273 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:23.273 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:23.273 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:23.273 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:23.273 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:23.273 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:23.273 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:23.273 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:23.273 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:23.273 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:23.273 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:23.273 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:23.273 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:23.273 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:23.273 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:23.273 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:23.273 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:23.273 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:23.273 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:23.273 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:23.273 Cannot find device "nvmf_tgt_br" 00:09:23.273 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # true 00:09:23.273 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:23.273 Cannot find device "nvmf_tgt_br2" 00:09:23.273 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # true 00:09:23.273 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:23.273 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:23.273 Cannot find device "nvmf_tgt_br" 00:09:23.273 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # true 00:09:23.273 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:23.273 Cannot find device "nvmf_tgt_br2" 00:09:23.273 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # true 00:09:23.273 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:23.273 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 
00:09:23.273 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:23.273 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:23.273 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:09:23.273 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:23.273 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:23.273 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:09:23.273 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:23.273 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:23.273 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:23.273 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:23.273 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:23.273 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:23.273 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:23.273 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:23.273 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:23.273 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:23.273 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:23.273 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:23.532 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:23.533 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:23.533 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:23.533 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:23.533 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:23.533 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:23.533 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:23.533 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:23.533 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:23.533 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:23.533 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:23.533 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:23.533 PING 10.0.0.2 (10.0.0.2) 56(84) bytes 
of data. 00:09:23.533 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:09:23.533 00:09:23.533 --- 10.0.0.2 ping statistics --- 00:09:23.533 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:23.533 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:09:23.533 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:23.533 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:23.533 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.034 ms 00:09:23.533 00:09:23.533 --- 10.0.0.3 ping statistics --- 00:09:23.533 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:23.533 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:09:23.533 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:23.533 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:23.533 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 00:09:23.533 00:09:23.533 --- 10.0.0.1 ping statistics --- 00:09:23.533 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:23.533 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 00:09:23.533 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:23.533 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@433 -- # return 0 00:09:23.533 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:23.533 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:23.533 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:23.533 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:23.533 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:23.533 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:23.533 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:23.533 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:23.533 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:23.533 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:23.533 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:23.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:23.533 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=66470 00:09:23.533 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 66470 00:09:23.533 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 66470 ']' 00:09:23.533 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:23.533 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:23.533 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:23.533 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
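Before nvmf_bdev_io_wait can run, nvmftestinit rebuilds the virtual test network; the "Cannot find device" and "Cannot open network namespace" lines above are the harmless pre-clean step removing leftovers that are already gone after the previous test's teardown. Trimmed to the essential commands from the trace (the individual ip link ... up steps are omitted here for brevity), the wiring looks like this:

ip netns add nvmf_tgt_ns_spdk                                   # target runs in its own namespace
ip link add nvmf_init_if type veth peer name nvmf_init_br       # initiator-side veth pair
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br        # target-side veth pairs
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                        # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link add nvmf_br type bridge && ip link set nvmf_br up       # bridge the three *_br peers
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2                                              # initiator -> target sanity check

With that in place the target is started inside the namespace (ip netns exec nvmf_tgt_ns_spdk ... nvmf_tgt -m 0xF --wait-for-rpc) and listens on 10.0.0.2, which is what the ping statistics above verify.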
00:09:23.533 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:23.533 16:58:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:23.533 [2024-07-15 16:58:13.738400] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:09:23.533 [2024-07-15 16:58:13.738509] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:23.793 [2024-07-15 16:58:13.878355] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:23.793 [2024-07-15 16:58:13.995277] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:23.793 [2024-07-15 16:58:13.995603] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:23.793 [2024-07-15 16:58:13.995737] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:23.793 [2024-07-15 16:58:13.995854] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:23.793 [2024-07-15 16:58:13.996034] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:23.793 [2024-07-15 16:58:13.996245] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:23.793 [2024-07-15 16:58:13.996298] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:23.793 [2024-07-15 16:58:13.996387] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:23.793 [2024-07-15 16:58:13.996394] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:24.730 16:58:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:24.730 16:58:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:09:24.730 16:58:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:24.730 16:58:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:24.730 16:58:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:24.731 16:58:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:24.731 16:58:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:24.731 16:58:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.731 16:58:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:24.731 16:58:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.731 16:58:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:24.731 16:58:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.731 16:58:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:24.731 [2024-07-15 16:58:14.846919] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:24.731 16:58:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.731 16:58:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:24.731 
16:58:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.731 16:58:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:24.731 [2024-07-15 16:58:14.859188] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:24.731 16:58:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.731 16:58:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:24.731 16:58:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.731 16:58:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:24.731 Malloc0 00:09:24.731 16:58:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.731 16:58:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:24.731 16:58:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.731 16:58:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:24.731 16:58:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.731 16:58:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:24.731 16:58:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.731 16:58:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:24.731 16:58:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.731 16:58:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:24.731 16:58:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.731 16:58:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:24.731 [2024-07-15 16:58:14.923375] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:24.731 16:58:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.731 16:58:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=66505 00:09:24.731 16:58:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:24.731 16:58:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:09:24.731 16:58:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=66507 00:09:24.731 16:58:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:24.731 16:58:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:24.731 16:58:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:24.731 16:58:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:24.731 16:58:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=66509 00:09:24.731 16:58:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:24.731 
16:58:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:24.731 { 00:09:24.731 "params": { 00:09:24.731 "name": "Nvme$subsystem", 00:09:24.731 "trtype": "$TEST_TRANSPORT", 00:09:24.731 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:24.731 "adrfam": "ipv4", 00:09:24.731 "trsvcid": "$NVMF_PORT", 00:09:24.731 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:24.731 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:24.731 "hdgst": ${hdgst:-false}, 00:09:24.731 "ddgst": ${ddgst:-false} 00:09:24.731 }, 00:09:24.731 "method": "bdev_nvme_attach_controller" 00:09:24.731 } 00:09:24.731 EOF 00:09:24.731 )") 00:09:24.731 16:58:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:24.731 16:58:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:24.731 16:58:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:24.731 16:58:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:24.731 16:58:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:24.731 { 00:09:24.731 "params": { 00:09:24.731 "name": "Nvme$subsystem", 00:09:24.731 "trtype": "$TEST_TRANSPORT", 00:09:24.731 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:24.731 "adrfam": "ipv4", 00:09:24.731 "trsvcid": "$NVMF_PORT", 00:09:24.731 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:24.731 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:24.731 "hdgst": ${hdgst:-false}, 00:09:24.731 "ddgst": ${ddgst:-false} 00:09:24.731 }, 00:09:24.731 "method": "bdev_nvme_attach_controller" 00:09:24.731 } 00:09:24.731 EOF 00:09:24.731 )") 00:09:24.731 16:58:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=66512 00:09:24.731 16:58:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:24.731 16:58:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:24.731 16:58:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:24.731 16:58:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:24.731 16:58:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:24.731 16:58:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:24.731 16:58:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:24.731 16:58:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:24.731 { 00:09:24.731 "params": { 00:09:24.731 "name": "Nvme$subsystem", 00:09:24.731 "trtype": "$TEST_TRANSPORT", 00:09:24.731 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:24.731 "adrfam": "ipv4", 00:09:24.731 "trsvcid": "$NVMF_PORT", 00:09:24.731 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:24.731 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:24.731 "hdgst": ${hdgst:-false}, 00:09:24.731 "ddgst": ${ddgst:-false} 00:09:24.731 }, 00:09:24.731 "method": "bdev_nvme_attach_controller" 00:09:24.731 } 00:09:24.731 EOF 00:09:24.731 )") 00:09:24.731 16:58:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:24.731 16:58:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:24.731 16:58:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:09:24.731 16:58:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:09:24.731 16:58:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:24.731 16:58:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:24.731 16:58:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:09:24.731 16:58:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:24.731 "params": { 00:09:24.731 "name": "Nvme1", 00:09:24.731 "trtype": "tcp", 00:09:24.731 "traddr": "10.0.0.2", 00:09:24.731 "adrfam": "ipv4", 00:09:24.731 "trsvcid": "4420", 00:09:24.731 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:24.731 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:24.731 "hdgst": false, 00:09:24.731 "ddgst": false 00:09:24.731 }, 00:09:24.731 "method": "bdev_nvme_attach_controller" 00:09:24.731 }' 00:09:24.731 16:58:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:24.731 "params": { 00:09:24.731 "name": "Nvme1", 00:09:24.731 "trtype": "tcp", 00:09:24.731 "traddr": "10.0.0.2", 00:09:24.731 "adrfam": "ipv4", 00:09:24.731 "trsvcid": "4420", 00:09:24.731 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:24.731 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:24.731 "hdgst": false, 00:09:24.731 "ddgst": false 00:09:24.731 }, 00:09:24.731 "method": "bdev_nvme_attach_controller" 00:09:24.731 }' 00:09:24.731 16:58:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:24.731 16:58:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:24.731 "params": { 00:09:24.731 "name": "Nvme1", 00:09:24.731 "trtype": "tcp", 00:09:24.731 "traddr": "10.0.0.2", 00:09:24.731 "adrfam": "ipv4", 00:09:24.731 "trsvcid": "4420", 00:09:24.731 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:24.731 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:24.731 "hdgst": false, 00:09:24.731 "ddgst": false 00:09:24.731 }, 00:09:24.731 "method": "bdev_nvme_attach_controller" 00:09:24.731 }' 00:09:24.731 16:58:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:24.731 16:58:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:24.731 16:58:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:24.731 16:58:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:24.731 16:58:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:24.731 { 00:09:24.731 "params": { 00:09:24.731 "name": "Nvme$subsystem", 00:09:24.731 "trtype": "$TEST_TRANSPORT", 00:09:24.731 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:24.731 "adrfam": "ipv4", 00:09:24.731 "trsvcid": "$NVMF_PORT", 00:09:24.731 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:24.731 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:24.731 "hdgst": ${hdgst:-false}, 00:09:24.731 "ddgst": ${ddgst:-false} 00:09:24.731 }, 00:09:24.731 "method": "bdev_nvme_attach_controller" 00:09:24.731 } 00:09:24.731 EOF 00:09:24.731 )") 00:09:24.731 16:58:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:24.731 16:58:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:09:24.731 16:58:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:24.732 16:58:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:24.732 "params": { 00:09:24.732 "name": "Nvme1", 00:09:24.732 "trtype": "tcp", 00:09:24.732 "traddr": "10.0.0.2", 00:09:24.732 "adrfam": "ipv4", 00:09:24.732 "trsvcid": "4420", 00:09:24.732 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:24.732 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:24.732 "hdgst": false, 00:09:24.732 "ddgst": false 00:09:24.732 }, 00:09:24.732 "method": "bdev_nvme_attach_controller" 00:09:24.732 }' 00:09:24.732 16:58:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 66505 00:09:24.732 [2024-07-15 16:58:14.985758] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:09:24.732 [2024-07-15 16:58:14.985843] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:09:24.732 [2024-07-15 16:58:14.994438] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:09:24.732 [2024-07-15 16:58:14.994533] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:09:24.732 [2024-07-15 16:58:15.013900] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:09:24.732 [2024-07-15 16:58:15.013985] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:09:25.006 [2024-07-15 16:58:15.028807] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:09:25.006 [2024-07-15 16:58:15.028907] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:09:25.006 [2024-07-15 16:58:15.196814] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:25.006 [2024-07-15 16:58:15.272715] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:25.006 [2024-07-15 16:58:15.297333] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:09:25.264 [2024-07-15 16:58:15.347224] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:25.265 [2024-07-15 16:58:15.350492] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:25.265 [2024-07-15 16:58:15.391483] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:09:25.265 [2024-07-15 16:58:15.427246] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:25.265 [2024-07-15 16:58:15.440854] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:25.265 Running I/O for 1 seconds... 
00:09:25.265 [2024-07-15 16:58:15.471246] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:09:25.265 [2024-07-15 16:58:15.519345] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:09:25.265 [2024-07-15 16:58:15.522968] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:25.265 Running I/O for 1 seconds... 00:09:25.522 [2024-07-15 16:58:15.565712] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:25.522 Running I/O for 1 seconds... 00:09:25.522 Running I/O for 1 seconds... 00:09:26.451 00:09:26.452 Latency(us) 00:09:26.452 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:26.452 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:26.452 Nvme1n1 : 1.02 6595.77 25.76 0.00 0.00 19140.52 7089.80 38368.35 00:09:26.452 =================================================================================================================== 00:09:26.452 Total : 6595.77 25.76 0.00 0.00 19140.52 7089.80 38368.35 00:09:26.452 00:09:26.452 Latency(us) 00:09:26.452 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:26.452 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:09:26.452 Nvme1n1 : 1.01 8427.42 32.92 0.00 0.00 15098.18 10187.87 25380.31 00:09:26.452 =================================================================================================================== 00:09:26.452 Total : 8427.42 32.92 0.00 0.00 15098.18 10187.87 25380.31 00:09:26.452 00:09:26.452 Latency(us) 00:09:26.452 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:26.452 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:26.452 Nvme1n1 : 1.01 6807.27 26.59 0.00 0.00 18742.27 5391.83 45517.73 00:09:26.452 =================================================================================================================== 00:09:26.452 Total : 6807.27 26.59 0.00 0.00 18742.27 5391.83 45517.73 00:09:26.452 00:09:26.452 Latency(us) 00:09:26.452 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:26.452 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:09:26.452 Nvme1n1 : 1.00 159512.86 623.10 0.00 0.00 799.43 389.12 2398.02 00:09:26.452 =================================================================================================================== 00:09:26.452 Total : 159512.86 623.10 0.00 0.00 799.43 389.12 2398.02 00:09:26.452 16:58:16 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 66507 00:09:26.709 16:58:16 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 66509 00:09:26.709 16:58:16 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 66512 00:09:26.709 16:58:16 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:26.709 16:58:16 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:26.709 16:58:16 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:26.709 16:58:16 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:26.709 16:58:16 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:26.709 16:58:16 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:26.709 16:58:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 
00:09:26.709 16:58:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:09:26.709 16:58:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:26.709 16:58:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:09:26.709 16:58:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:26.709 16:58:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:26.967 rmmod nvme_tcp 00:09:26.967 rmmod nvme_fabrics 00:09:26.967 rmmod nvme_keyring 00:09:26.967 16:58:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:26.967 16:58:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:09:26.967 16:58:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:09:26.967 16:58:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 66470 ']' 00:09:26.967 16:58:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 66470 00:09:26.967 16:58:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 66470 ']' 00:09:26.967 16:58:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 66470 00:09:26.967 16:58:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:09:26.967 16:58:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:26.967 16:58:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66470 00:09:26.967 16:58:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:26.967 killing process with pid 66470 00:09:26.967 16:58:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:26.967 16:58:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66470' 00:09:26.967 16:58:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 66470 00:09:26.967 16:58:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 66470 00:09:27.225 16:58:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:27.225 16:58:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:27.225 16:58:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:27.225 16:58:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:27.225 16:58:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:27.225 16:58:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:27.225 16:58:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:27.225 16:58:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:27.225 16:58:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:27.225 00:09:27.225 real 0m4.118s 00:09:27.225 user 0m18.206s 00:09:27.225 sys 0m2.131s 00:09:27.225 16:58:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:27.225 16:58:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:27.225 ************************************ 00:09:27.225 END TEST nvmf_bdev_io_wait 00:09:27.225 ************************************ 00:09:27.225 16:58:17 nvmf_tcp -- common/autotest_common.sh@1142 -- 
# return 0 00:09:27.225 16:58:17 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:27.225 16:58:17 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:27.225 16:58:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:27.225 16:58:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:27.225 ************************************ 00:09:27.225 START TEST nvmf_queue_depth 00:09:27.225 ************************************ 00:09:27.225 16:58:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:27.225 * Looking for test storage... 00:09:27.225 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:27.225 16:58:17 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:27.225 16:58:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:09:27.225 16:58:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:27.225 16:58:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:27.225 16:58:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:27.225 16:58:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:27.225 16:58:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:27.225 16:58:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:27.225 16:58:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:27.225 16:58:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:27.225 16:58:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:27.225 16:58:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:27.225 16:58:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da 00:09:27.225 16:58:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=0b4e8503-7bac-4879-926a-209303c4b3da 00:09:27.225 16:58:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:27.225 16:58:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:27.225 16:58:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:27.225 16:58:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:27.225 16:58:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:27.225 16:58:17 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:27.225 16:58:17 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:27.225 16:58:17 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:27.225 16:58:17 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.225 16:58:17 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.225 16:58:17 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.225 16:58:17 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:27.225 16:58:17 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.225 16:58:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:09:27.225 16:58:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:27.225 16:58:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:27.225 16:58:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:27.225 16:58:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:27.225 16:58:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:27.225 16:58:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:27.225 16:58:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:27.225 16:58:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:27.225 16:58:17 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:27.225 16:58:17 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:09:27.225 16:58:17 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:27.225 16:58:17 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:27.225 16:58:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:27.225 16:58:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:27.225 16:58:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:27.225 16:58:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:27.225 16:58:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:27.225 16:58:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:27.225 16:58:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:27.225 16:58:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:27.225 16:58:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:27.225 16:58:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:27.225 16:58:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:27.225 16:58:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:27.225 16:58:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:27.225 16:58:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:27.225 16:58:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:27.225 16:58:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:27.225 16:58:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:27.225 16:58:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:27.225 16:58:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:27.225 16:58:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:27.225 16:58:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:27.225 16:58:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:27.225 16:58:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:27.225 16:58:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:27.225 16:58:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:27.225 16:58:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:27.225 16:58:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:27.225 16:58:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:27.481 Cannot find device "nvmf_tgt_br" 00:09:27.481 16:58:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@155 -- # true 00:09:27.481 16:58:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:27.481 Cannot find device "nvmf_tgt_br2" 00:09:27.481 16:58:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@156 -- # true 00:09:27.481 16:58:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:27.481 16:58:17 
nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:27.481 Cannot find device "nvmf_tgt_br" 00:09:27.481 16:58:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@158 -- # true 00:09:27.481 16:58:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:27.481 Cannot find device "nvmf_tgt_br2" 00:09:27.481 16:58:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@159 -- # true 00:09:27.481 16:58:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:27.481 16:58:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:27.481 16:58:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:27.481 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:27.481 16:58:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:09:27.481 16:58:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:27.481 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:27.481 16:58:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:09:27.481 16:58:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:27.481 16:58:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:27.481 16:58:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:27.481 16:58:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:27.481 16:58:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:27.481 16:58:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:27.481 16:58:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:27.481 16:58:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:27.481 16:58:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:27.481 16:58:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:27.481 16:58:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:27.481 16:58:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:27.481 16:58:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:27.481 16:58:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:27.481 16:58:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:27.481 16:58:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:27.481 16:58:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:27.481 16:58:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:27.482 16:58:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:27.482 
16:58:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:27.482 16:58:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:27.738 16:58:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:27.738 16:58:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:27.738 16:58:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:27.738 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:27.738 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:09:27.738 00:09:27.738 --- 10.0.0.2 ping statistics --- 00:09:27.738 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:27.738 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:09:27.738 16:58:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:27.738 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:27.738 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:09:27.738 00:09:27.738 --- 10.0.0.3 ping statistics --- 00:09:27.738 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:27.738 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:09:27.738 16:58:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:27.738 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:27.738 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:09:27.738 00:09:27.738 --- 10.0.0.1 ping statistics --- 00:09:27.738 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:27.738 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:09:27.738 16:58:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:27.738 16:58:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@433 -- # return 0 00:09:27.738 16:58:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:27.738 16:58:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:27.738 16:58:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:27.738 16:58:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:27.738 16:58:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:27.738 16:58:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:27.738 16:58:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:27.738 16:58:17 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:27.738 16:58:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:27.738 16:58:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:27.738 16:58:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:27.738 16:58:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=66746 00:09:27.738 16:58:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:27.738 16:58:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 66746 00:09:27.738 16:58:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 66746 ']' 00:09:27.738 16:58:17 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:27.738 16:58:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:27.738 16:58:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:27.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:27.738 16:58:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:27.738 16:58:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:27.738 [2024-07-15 16:58:17.882236] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:09:27.738 [2024-07-15 16:58:17.882340] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:27.738 [2024-07-15 16:58:18.022433] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:27.996 [2024-07-15 16:58:18.143256] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:27.996 [2024-07-15 16:58:18.143308] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:27.996 [2024-07-15 16:58:18.143330] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:27.996 [2024-07-15 16:58:18.143338] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:27.996 [2024-07-15 16:58:18.143345] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:27.996 [2024-07-15 16:58:18.143382] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:27.996 [2024-07-15 16:58:18.198686] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:28.630 16:58:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:28.630 16:58:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:09:28.630 16:58:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:28.630 16:58:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:28.630 16:58:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:28.630 16:58:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:28.630 16:58:18 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:28.630 16:58:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.630 16:58:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:28.630 [2024-07-15 16:58:18.872133] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:28.630 16:58:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.630 16:58:18 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:28.630 16:58:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.630 16:58:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:28.630 Malloc0 00:09:28.630 16:58:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.630 16:58:18 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:28.630 16:58:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.630 16:58:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:28.630 16:58:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.630 16:58:18 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:28.630 16:58:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.630 16:58:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:28.630 16:58:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.630 16:58:18 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:28.630 16:58:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.630 16:58:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:28.630 [2024-07-15 16:58:18.926694] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:28.888 16:58:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.888 16:58:18 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=66778 00:09:28.888 16:58:18 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:28.888 16:58:18 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:28.888 16:58:18 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 66778 /var/tmp/bdevperf.sock 00:09:28.888 16:58:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 66778 ']' 00:09:28.888 16:58:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:28.888 16:58:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:28.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:28.888 16:58:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:28.888 16:58:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:28.888 16:58:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:28.888 [2024-07-15 16:58:18.978150] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:09:28.888 [2024-07-15 16:58:18.978237] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66778 ] 00:09:28.888 [2024-07-15 16:58:19.112765] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:29.146 [2024-07-15 16:58:19.231432] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:29.146 [2024-07-15 16:58:19.287855] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:29.712 16:58:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:29.712 16:58:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:09:29.712 16:58:19 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:29.712 16:58:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.712 16:58:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:29.969 NVMe0n1 00:09:29.969 16:58:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:29.969 16:58:20 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:29.969 Running I/O for 10 seconds... 
00:09:40.021 00:09:40.021 Latency(us) 00:09:40.021 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:40.021 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:40.021 Verification LBA range: start 0x0 length 0x4000 00:09:40.021 NVMe0n1 : 10.09 7580.02 29.61 0.00 0.00 134332.85 27882.59 100567.97 00:09:40.021 =================================================================================================================== 00:09:40.021 Total : 7580.02 29.61 0.00 0.00 134332.85 27882.59 100567.97 00:09:40.021 0 00:09:40.021 16:58:30 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 66778 00:09:40.021 16:58:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 66778 ']' 00:09:40.021 16:58:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 66778 00:09:40.021 16:58:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:09:40.021 16:58:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:40.021 16:58:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66778 00:09:40.021 16:58:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:40.021 killing process with pid 66778 00:09:40.021 16:58:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:40.021 16:58:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66778' 00:09:40.021 16:58:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 66778 00:09:40.021 Received shutdown signal, test time was about 10.000000 seconds 00:09:40.021 00:09:40.021 Latency(us) 00:09:40.021 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:40.021 =================================================================================================================== 00:09:40.021 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:40.021 16:58:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 66778 00:09:40.278 16:58:30 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:40.278 16:58:30 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:09:40.278 16:58:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:40.278 16:58:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:09:40.278 16:58:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:40.278 16:58:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:09:40.278 16:58:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:40.278 16:58:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:40.278 rmmod nvme_tcp 00:09:40.537 rmmod nvme_fabrics 00:09:40.537 rmmod nvme_keyring 00:09:40.537 16:58:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:40.537 16:58:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:09:40.537 16:58:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:09:40.537 16:58:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 66746 ']' 00:09:40.537 16:58:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 66746 00:09:40.537 16:58:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 66746 ']' 00:09:40.537 
16:58:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 66746 00:09:40.537 16:58:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:09:40.537 16:58:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:40.537 16:58:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66746 00:09:40.537 16:58:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:40.537 16:58:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:40.537 16:58:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66746' 00:09:40.537 killing process with pid 66746 00:09:40.537 16:58:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 66746 00:09:40.537 16:58:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 66746 00:09:40.796 16:58:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:40.796 16:58:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:40.796 16:58:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:40.796 16:58:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:40.796 16:58:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:40.796 16:58:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:40.796 16:58:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:40.796 16:58:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:40.796 16:58:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:40.796 00:09:40.796 real 0m13.538s 00:09:40.796 user 0m23.345s 00:09:40.796 sys 0m2.335s 00:09:40.796 16:58:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:40.796 ************************************ 00:09:40.796 16:58:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:40.796 END TEST nvmf_queue_depth 00:09:40.796 ************************************ 00:09:40.796 16:58:30 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:40.796 16:58:30 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:40.796 16:58:30 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:40.796 16:58:30 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:40.796 16:58:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:40.796 ************************************ 00:09:40.796 START TEST nvmf_target_multipath 00:09:40.796 ************************************ 00:09:40.796 16:58:30 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:40.796 * Looking for test storage... 
00:09:40.796 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:40.796 16:58:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:40.796 16:58:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:09:40.796 16:58:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:40.796 16:58:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:40.796 16:58:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:40.796 16:58:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:40.796 16:58:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:40.796 16:58:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:40.796 16:58:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:40.796 16:58:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:40.796 16:58:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:40.796 16:58:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:40.796 16:58:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da 00:09:40.796 16:58:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=0b4e8503-7bac-4879-926a-209303c4b3da 00:09:40.796 16:58:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:40.796 16:58:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:40.796 16:58:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:40.796 16:58:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:40.796 16:58:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:40.796 16:58:31 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:40.796 16:58:31 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:40.796 16:58:31 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:40.796 16:58:31 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.796 16:58:31 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.796 16:58:31 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.796 16:58:31 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:40.796 16:58:31 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.796 16:58:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:09:40.796 16:58:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:40.796 16:58:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:40.796 16:58:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:40.796 16:58:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:40.796 16:58:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:40.796 16:58:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:40.796 16:58:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:40.796 16:58:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:40.796 16:58:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:40.796 16:58:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:40.796 16:58:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:41.055 16:58:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:41.055 16:58:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:41.055 16:58:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:41.055 16:58:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:41.055 16:58:31 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:41.055 16:58:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:41.055 16:58:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:41.055 16:58:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:41.055 16:58:31 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:41.055 16:58:31 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:41.055 16:58:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:41.055 16:58:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:41.055 16:58:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:41.055 16:58:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:41.055 16:58:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:41.055 16:58:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:41.055 16:58:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:41.055 16:58:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:41.055 16:58:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:41.055 16:58:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:41.055 16:58:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:41.055 16:58:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:41.055 16:58:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:41.055 16:58:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:41.055 16:58:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:41.055 16:58:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:41.055 16:58:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:41.055 16:58:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:41.055 16:58:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:41.055 16:58:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:41.055 Cannot find device "nvmf_tgt_br" 00:09:41.055 16:58:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@155 -- # true 00:09:41.055 16:58:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:41.055 Cannot find device "nvmf_tgt_br2" 00:09:41.055 16:58:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@156 -- # true 00:09:41.055 16:58:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:41.055 16:58:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:41.055 Cannot find device "nvmf_tgt_br" 00:09:41.055 16:58:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@158 -- # true 00:09:41.055 
16:58:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:41.055 Cannot find device "nvmf_tgt_br2" 00:09:41.055 16:58:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@159 -- # true 00:09:41.055 16:58:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:41.055 16:58:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:41.055 16:58:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:41.055 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:41.055 16:58:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:09:41.055 16:58:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:41.055 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:41.055 16:58:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:09:41.055 16:58:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:41.055 16:58:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:41.055 16:58:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:41.055 16:58:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:41.055 16:58:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:41.055 16:58:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:41.055 16:58:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:41.055 16:58:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:41.055 16:58:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:41.055 16:58:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:41.055 16:58:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:41.055 16:58:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:41.055 16:58:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:41.055 16:58:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:41.055 16:58:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:41.055 16:58:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:41.055 16:58:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:41.313 16:58:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:41.313 16:58:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:41.313 16:58:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br 
master nvmf_br 00:09:41.313 16:58:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:41.313 16:58:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:41.313 16:58:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:41.313 16:58:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:41.313 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:41.313 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.087 ms 00:09:41.313 00:09:41.313 --- 10.0.0.2 ping statistics --- 00:09:41.313 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:41.313 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:09:41.313 16:58:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:41.313 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:41.313 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:09:41.313 00:09:41.313 --- 10.0.0.3 ping statistics --- 00:09:41.313 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:41.313 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:09:41.313 16:58:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:41.313 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:41.313 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.053 ms 00:09:41.313 00:09:41.313 --- 10.0.0.1 ping statistics --- 00:09:41.313 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:41.313 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:09:41.313 16:58:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:41.313 16:58:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@433 -- # return 0 00:09:41.313 16:58:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:41.313 16:58:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:41.313 16:58:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:41.313 16:58:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:41.313 16:58:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:41.313 16:58:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:41.313 16:58:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:41.313 16:58:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:09:41.313 16:58:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:09:41.313 16:58:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:09:41.313 16:58:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:41.313 16:58:31 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:41.313 16:58:31 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:41.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:41.313 16:58:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@481 -- # nvmfpid=67099 00:09:41.313 16:58:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@482 -- # waitforlisten 67099 00:09:41.313 16:58:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:41.313 16:58:31 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@829 -- # '[' -z 67099 ']' 00:09:41.313 16:58:31 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:41.314 16:58:31 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:41.314 16:58:31 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:41.314 16:58:31 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:41.314 16:58:31 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:41.314 [2024-07-15 16:58:31.506423] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:09:41.314 [2024-07-15 16:58:31.506505] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:41.572 [2024-07-15 16:58:31.645831] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:41.572 [2024-07-15 16:58:31.775165] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:41.572 [2024-07-15 16:58:31.775564] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:41.572 [2024-07-15 16:58:31.775823] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:41.572 [2024-07-15 16:58:31.775967] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:41.572 [2024-07-15 16:58:31.776157] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:41.572 [2024-07-15 16:58:31.776395] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:41.572 [2024-07-15 16:58:31.776499] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:41.572 [2024-07-15 16:58:31.776626] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:41.572 [2024-07-15 16:58:31.776631] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:41.572 [2024-07-15 16:58:31.835696] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:42.505 16:58:32 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:42.505 16:58:32 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@862 -- # return 0 00:09:42.505 16:58:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:42.505 16:58:32 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:42.505 16:58:32 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:42.505 16:58:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:42.505 16:58:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:42.505 [2024-07-15 16:58:32.725521] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:42.505 16:58:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:09:42.764 Malloc0 00:09:43.022 16:58:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:09:43.280 16:58:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:43.537 16:58:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:43.795 [2024-07-15 16:58:33.838575] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:43.795 16:58:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:43.795 [2024-07-15 16:58:34.074766] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:44.052 16:58:34 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --hostid=0b4e8503-7bac-4879-926a-209303c4b3da -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:09:44.052 16:58:34 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --hostid=0b4e8503-7bac-4879-926a-209303c4b3da -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:09:44.308 16:58:34 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:09:44.308 16:58:34 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1198 -- # 
local i=0 00:09:44.308 16:58:34 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:44.308 16:58:34 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:44.308 16:58:34 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1205 -- # sleep 2 00:09:46.199 16:58:36 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:46.199 16:58:36 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:46.199 16:58:36 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:46.199 16:58:36 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:46.199 16:58:36 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:46.199 16:58:36 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # return 0 00:09:46.199 16:58:36 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:09:46.199 16:58:36 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:09:46.199 16:58:36 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:09:46.199 16:58:36 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:09:46.199 16:58:36 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:09:46.199 16:58:36 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:09:46.199 16:58:36 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:09:46.199 16:58:36 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:09:46.199 16:58:36 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:09:46.199 16:58:36 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:09:46.199 16:58:36 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:09:46.199 16:58:36 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:09:46.199 16:58:36 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:09:46.199 16:58:36 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:09:46.199 16:58:36 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:09:46.199 16:58:36 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:46.199 16:58:36 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:46.199 16:58:36 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:09:46.199 16:58:36 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:46.199 16:58:36 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:09:46.199 16:58:36 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:09:46.199 16:58:36 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:46.199 16:58:36 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:46.199 16:58:36 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:46.199 16:58:36 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:46.199 16:58:36 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:09:46.199 16:58:36 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=67194 00:09:46.199 16:58:36 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:09:46.199 16:58:36 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:09:46.199 [global] 00:09:46.199 thread=1 00:09:46.199 invalidate=1 00:09:46.199 rw=randrw 00:09:46.199 time_based=1 00:09:46.199 runtime=6 00:09:46.199 ioengine=libaio 00:09:46.199 direct=1 00:09:46.199 bs=4096 00:09:46.199 iodepth=128 00:09:46.199 norandommap=0 00:09:46.199 numjobs=1 00:09:46.199 00:09:46.199 verify_dump=1 00:09:46.199 verify_backlog=512 00:09:46.199 verify_state_save=0 00:09:46.199 do_verify=1 00:09:46.199 verify=crc32c-intel 00:09:46.199 [job0] 00:09:46.199 filename=/dev/nvme0n1 00:09:46.199 Could not set queue depth (nvme0n1) 00:09:46.456 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:46.456 fio-3.35 00:09:46.456 Starting 1 thread 00:09:47.389 16:58:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:09:47.646 16:58:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:09:47.904 16:58:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:09:47.904 16:58:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:09:47.904 16:58:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:47.905 16:58:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:47.905 16:58:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:09:47.905 16:58:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:47.905 16:58:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:09:47.905 16:58:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:09:47.905 16:58:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:47.905 16:58:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:47.905 16:58:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:47.905 16:58:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:47.905 16:58:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:09:48.163 16:58:38 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:09:48.422 16:58:38 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:09:48.422 16:58:38 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:09:48.422 16:58:38 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:48.422 16:58:38 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:48.422 16:58:38 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:48.422 16:58:38 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:48.422 16:58:38 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:09:48.422 16:58:38 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:09:48.422 16:58:38 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:48.422 16:58:38 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:48.422 16:58:38 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:09:48.422 16:58:38 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:48.422 16:58:38 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 67194 00:09:52.608 00:09:52.608 job0: (groupid=0, jobs=1): err= 0: pid=67215: Mon Jul 15 16:58:42 2024 00:09:52.608 read: IOPS=10.2k, BW=39.9MiB/s (41.8MB/s)(239MiB/6002msec) 00:09:52.608 slat (usec): min=3, max=7307, avg=57.25, stdev=233.04 00:09:52.608 clat (usec): min=1424, max=18086, avg=8513.01, stdev=1557.19 00:09:52.608 lat (usec): min=1556, max=18104, avg=8570.26, stdev=1563.00 00:09:52.608 clat percentiles (usec): 00:09:52.608 | 1.00th=[ 4359], 5.00th=[ 6521], 10.00th=[ 7242], 20.00th=[ 7701], 00:09:52.608 | 30.00th=[ 7963], 40.00th=[ 8160], 50.00th=[ 8291], 60.00th=[ 8586], 00:09:52.608 | 70.00th=[ 8848], 80.00th=[ 9110], 90.00th=[ 9896], 95.00th=[12256], 00:09:52.608 | 99.00th=[13435], 99.50th=[13829], 99.90th=[16057], 99.95th=[16909], 00:09:52.608 | 99.99th=[17433] 00:09:52.608 bw ( KiB/s): min= 3344, max=27624, per=52.01%, avg=21251.64, stdev=7838.11, samples=11 00:09:52.608 iops : min= 836, max= 6906, avg=5312.91, stdev=1959.53, samples=11 00:09:52.608 write: IOPS=6146, BW=24.0MiB/s (25.2MB/s)(126MiB/5254msec); 0 zone resets 00:09:52.608 slat (usec): min=5, max=2986, avg=67.27, stdev=165.56 00:09:52.608 clat (usec): min=1466, max=17295, avg=7412.77, stdev=1279.15 00:09:52.608 lat (usec): min=1524, max=17320, avg=7480.04, stdev=1284.21 00:09:52.608 clat percentiles (usec): 00:09:52.608 | 1.00th=[ 3458], 5.00th=[ 4555], 10.00th=[ 6194], 20.00th=[ 6915], 00:09:52.608 | 30.00th=[ 7177], 40.00th=[ 7373], 50.00th=[ 7570], 60.00th=[ 7701], 00:09:52.608 | 70.00th=[ 7898], 80.00th=[ 8094], 90.00th=[ 8455], 95.00th=[ 8848], 00:09:52.608 | 99.00th=[11469], 99.50th=[12125], 99.90th=[14091], 99.95th=[15270], 00:09:52.608 | 99.99th=[16057] 00:09:52.608 bw ( KiB/s): min= 3264, max=26856, per=86.72%, avg=21320.00, stdev=7733.62, samples=11 00:09:52.608 iops : min= 816, max= 6714, avg=5330.00, stdev=1933.41, samples=11 00:09:52.608 lat (msec) : 2=0.06%, 4=1.37%, 10=91.61%, 20=6.96% 00:09:52.608 cpu : usr=5.35%, sys=21.48%, ctx=5401, majf=0, minf=145 00:09:52.608 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:09:52.608 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:52.608 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:52.608 issued rwts: total=61311,32292,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:52.608 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:52.608 00:09:52.608 Run status group 0 (all jobs): 00:09:52.608 READ: bw=39.9MiB/s (41.8MB/s), 39.9MiB/s-39.9MiB/s (41.8MB/s-41.8MB/s), io=239MiB (251MB), run=6002-6002msec 00:09:52.608 WRITE: bw=24.0MiB/s (25.2MB/s), 24.0MiB/s-24.0MiB/s (25.2MB/s-25.2MB/s), io=126MiB (132MB), run=5254-5254msec 00:09:52.608 00:09:52.608 Disk stats (read/write): 00:09:52.608 nvme0n1: ios=60571/31664, merge=0/0, ticks=494175/220233, in_queue=714408, util=98.58% 00:09:52.608 16:58:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:09:52.867 16:58:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 
4420 -n optimized 00:09:53.124 16:58:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:09:53.124 16:58:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:09:53.124 16:58:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:53.124 16:58:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:53.124 16:58:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:53.124 16:58:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:53.124 16:58:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:09:53.124 16:58:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:09:53.124 16:58:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:53.124 16:58:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:53.124 16:58:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:53.124 16:58:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:53.124 16:58:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:09:53.124 16:58:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=67297 00:09:53.124 16:58:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:09:53.124 16:58:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:09:53.124 [global] 00:09:53.124 thread=1 00:09:53.124 invalidate=1 00:09:53.124 rw=randrw 00:09:53.124 time_based=1 00:09:53.124 runtime=6 00:09:53.124 ioengine=libaio 00:09:53.124 direct=1 00:09:53.124 bs=4096 00:09:53.124 iodepth=128 00:09:53.124 norandommap=0 00:09:53.124 numjobs=1 00:09:53.124 00:09:53.124 verify_dump=1 00:09:53.124 verify_backlog=512 00:09:53.124 verify_state_save=0 00:09:53.124 do_verify=1 00:09:53.124 verify=crc32c-intel 00:09:53.124 [job0] 00:09:53.124 filename=/dev/nvme0n1 00:09:53.124 Could not set queue depth (nvme0n1) 00:09:53.124 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:53.124 fio-3.35 00:09:53.124 Starting 1 thread 00:09:54.059 16:58:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:09:54.317 16:58:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:09:54.575 16:58:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:09:54.575 16:58:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:09:54.575 16:58:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:54.575 16:58:44 nvmf_tcp.nvmf_target_multipath 
-- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:54.575 16:58:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:54.575 16:58:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:54.575 16:58:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:09:54.575 16:58:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:09:54.575 16:58:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:54.575 16:58:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:54.575 16:58:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:54.575 16:58:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:54.575 16:58:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:09:55.142 16:58:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:09:55.142 16:58:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:09:55.142 16:58:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:09:55.142 16:58:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:55.142 16:58:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:55.142 16:58:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:55.142 16:58:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:55.142 16:58:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:09:55.142 16:58:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:09:55.142 16:58:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:55.142 16:58:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:55.142 16:58:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:09:55.142 16:58:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:55.142 16:58:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 67297 00:09:59.328 00:09:59.328 job0: (groupid=0, jobs=1): err= 0: pid=67318: Mon Jul 15 16:58:49 2024 00:09:59.328 read: IOPS=11.3k, BW=44.3MiB/s (46.4MB/s)(266MiB/6003msec) 00:09:59.328 slat (usec): min=6, max=6101, avg=43.52, stdev=193.11 00:09:59.328 clat (usec): min=606, max=16536, avg=7721.49, stdev=1956.42 00:09:59.328 lat (usec): min=622, max=16550, avg=7765.01, stdev=1972.30 00:09:59.328 clat percentiles (usec): 00:09:59.328 | 1.00th=[ 3032], 5.00th=[ 4293], 10.00th=[ 5014], 20.00th=[ 5997], 00:09:59.328 | 30.00th=[ 7111], 40.00th=[ 7767], 50.00th=[ 8094], 60.00th=[ 8291], 00:09:59.328 | 70.00th=[ 8455], 80.00th=[ 8848], 90.00th=[ 9503], 95.00th=[11338], 00:09:59.328 | 99.00th=[13042], 99.50th=[13304], 99.90th=[14615], 99.95th=[15401], 00:09:59.328 | 99.99th=[16450] 00:09:59.328 bw ( KiB/s): min= 8144, max=35640, per=53.83%, avg=24406.55, stdev=7355.19, samples=11 00:09:59.328 iops : min= 2036, max= 8910, avg=6101.64, stdev=1838.80, samples=11 00:09:59.328 write: IOPS=6709, BW=26.2MiB/s (27.5MB/s)(143MiB/5442msec); 0 zone resets 00:09:59.328 slat (usec): min=12, max=1679, avg=54.27, stdev=140.42 00:09:59.328 clat (usec): min=1759, max=14400, avg=6488.92, stdev=1862.02 00:09:59.328 lat (usec): min=1781, max=15557, avg=6543.18, stdev=1879.34 00:09:59.328 clat percentiles (usec): 00:09:59.328 | 1.00th=[ 2737], 5.00th=[ 3425], 10.00th=[ 3818], 20.00th=[ 4424], 00:09:59.328 | 30.00th=[ 5145], 40.00th=[ 6521], 50.00th=[ 7177], 60.00th=[ 7439], 00:09:59.328 | 70.00th=[ 7701], 80.00th=[ 7963], 90.00th=[ 8356], 95.00th=[ 8717], 00:09:59.328 | 99.00th=[10945], 99.50th=[11600], 99.90th=[12911], 99.95th=[13435], 00:09:59.328 | 99.99th=[14091] 00:09:59.328 bw ( KiB/s): min= 8256, max=36512, per=90.83%, avg=24376.00, stdev=7294.73, samples=11 00:09:59.328 iops : min= 2064, max= 9128, avg=6094.00, stdev=1823.68, samples=11 00:09:59.328 lat (usec) : 750=0.01%, 1000=0.01% 00:09:59.328 lat (msec) : 2=0.16%, 4=6.57%, 10=88.09%, 20=5.16% 00:09:59.328 cpu : usr=6.03%, sys=21.83%, ctx=5898, majf=0, minf=96 00:09:59.328 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:09:59.328 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.328 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:59.328 issued rwts: total=68044,36511,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:59.328 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:59.328 00:09:59.328 Run status group 0 (all jobs): 00:09:59.328 READ: bw=44.3MiB/s (46.4MB/s), 44.3MiB/s-44.3MiB/s (46.4MB/s-46.4MB/s), io=266MiB (279MB), run=6003-6003msec 00:09:59.328 WRITE: bw=26.2MiB/s (27.5MB/s), 26.2MiB/s-26.2MiB/s (27.5MB/s-27.5MB/s), io=143MiB (150MB), run=5442-5442msec 00:09:59.328 00:09:59.328 Disk stats (read/write): 00:09:59.328 nvme0n1: ios=67436/35658, merge=0/0, ticks=498583/216144, in_queue=714727, util=98.60% 00:09:59.328 16:58:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:59.328 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:59.328 16:58:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:59.328 16:58:49 nvmf_tcp.nvmf_target_multipath -- 
common/autotest_common.sh@1219 -- # local i=0 00:09:59.328 16:58:49 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:59.328 16:58:49 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:59.328 16:58:49 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:59.328 16:58:49 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:59.586 16:58:49 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # return 0 00:09:59.586 16:58:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:59.844 16:58:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:09:59.844 16:58:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:09:59.844 16:58:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:09:59.844 16:58:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:09:59.844 16:58:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:59.844 16:58:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:09:59.844 16:58:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:59.844 16:58:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:09:59.844 16:58:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:59.844 16:58:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:59.844 rmmod nvme_tcp 00:09:59.844 rmmod nvme_fabrics 00:09:59.844 rmmod nvme_keyring 00:09:59.844 16:58:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:59.844 16:58:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:09:59.844 16:58:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:09:59.844 16:58:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n 67099 ']' 00:09:59.844 16:58:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@490 -- # killprocess 67099 00:09:59.844 16:58:50 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@948 -- # '[' -z 67099 ']' 00:09:59.844 16:58:50 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@952 -- # kill -0 67099 00:09:59.844 16:58:50 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@953 -- # uname 00:09:59.844 16:58:50 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:59.844 16:58:50 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 67099 00:09:59.844 16:58:50 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:59.844 16:58:50 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:59.844 killing process with pid 67099 00:09:59.844 16:58:50 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67099' 00:09:59.844 16:58:50 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@967 -- # kill 67099 00:09:59.844 16:58:50 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@972 -- # wait 67099 00:10:00.102 
16:58:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:00.102 16:58:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:00.102 16:58:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:00.102 16:58:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:00.102 16:58:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:00.102 16:58:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:00.102 16:58:50 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:00.102 16:58:50 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:00.102 16:58:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:00.102 00:10:00.102 real 0m19.362s 00:10:00.102 user 1m12.723s 00:10:00.102 sys 0m9.466s 00:10:00.102 16:58:50 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:00.102 16:58:50 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:00.102 ************************************ 00:10:00.102 END TEST nvmf_target_multipath 00:10:00.102 ************************************ 00:10:00.362 16:58:50 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:00.362 16:58:50 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:00.362 16:58:50 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:00.362 16:58:50 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:00.362 16:58:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:00.362 ************************************ 00:10:00.362 START TEST nvmf_zcopy 00:10:00.362 ************************************ 00:10:00.362 16:58:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:00.362 * Looking for test storage... 
00:10:00.362 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:00.362 16:58:50 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:00.362 16:58:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:10:00.362 16:58:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:00.362 16:58:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:00.362 16:58:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:00.362 16:58:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:00.362 16:58:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:00.362 16:58:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:00.362 16:58:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:00.362 16:58:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:00.362 16:58:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:00.362 16:58:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:00.362 16:58:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da 00:10:00.362 16:58:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=0b4e8503-7bac-4879-926a-209303c4b3da 00:10:00.362 16:58:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:00.362 16:58:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:00.362 16:58:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:00.362 16:58:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:00.362 16:58:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:00.362 16:58:50 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:00.362 16:58:50 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:00.362 16:58:50 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:00.362 16:58:50 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.362 16:58:50 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.362 16:58:50 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.362 16:58:50 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:10:00.362 16:58:50 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.362 16:58:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:10:00.362 16:58:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:00.362 16:58:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:00.362 16:58:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:00.362 16:58:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:00.362 16:58:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:00.362 16:58:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:00.362 16:58:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:00.362 16:58:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:00.362 16:58:50 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:10:00.362 16:58:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:00.362 16:58:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:00.362 16:58:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:00.362 16:58:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:00.362 16:58:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:00.362 16:58:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:00.362 16:58:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:00.362 16:58:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:00.362 16:58:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:00.362 16:58:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:00.362 16:58:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:00.362 16:58:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:00.362 16:58:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:00.362 16:58:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:00.362 16:58:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:00.362 16:58:50 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:00.362 16:58:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:00.363 16:58:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:00.363 16:58:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:00.363 16:58:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:00.363 16:58:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:00.363 16:58:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:00.363 16:58:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:00.363 16:58:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:00.363 16:58:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:00.363 16:58:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:00.363 16:58:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:00.363 16:58:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:00.363 Cannot find device "nvmf_tgt_br" 00:10:00.363 16:58:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@155 -- # true 00:10:00.363 16:58:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:00.363 Cannot find device "nvmf_tgt_br2" 00:10:00.363 16:58:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@156 -- # true 00:10:00.363 16:58:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:00.363 16:58:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:00.363 Cannot find device "nvmf_tgt_br" 00:10:00.363 16:58:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@158 -- # true 00:10:00.363 16:58:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:00.363 Cannot find device "nvmf_tgt_br2" 00:10:00.363 16:58:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@159 -- # true 00:10:00.363 16:58:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:00.363 16:58:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:00.622 16:58:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:00.622 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:00.622 16:58:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:10:00.622 16:58:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:00.622 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:00.622 16:58:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:10:00.622 16:58:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:00.622 16:58:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:00.622 16:58:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:00.622 16:58:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:00.622 16:58:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns 
nvmf_tgt_ns_spdk 00:10:00.622 16:58:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:00.622 16:58:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:00.622 16:58:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:00.622 16:58:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:00.622 16:58:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:00.622 16:58:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:00.622 16:58:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:00.622 16:58:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:00.622 16:58:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:00.622 16:58:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:00.622 16:58:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:00.622 16:58:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:00.622 16:58:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:00.622 16:58:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:00.622 16:58:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:00.622 16:58:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:00.622 16:58:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:00.622 16:58:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:00.622 16:58:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:00.622 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:00.622 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.119 ms 00:10:00.622 00:10:00.622 --- 10.0.0.2 ping statistics --- 00:10:00.622 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:00.622 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:10:00.622 16:58:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:00.622 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:00.622 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:10:00.622 00:10:00.622 --- 10.0.0.3 ping statistics --- 00:10:00.622 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:00.622 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:10:00.622 16:58:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:00.622 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:00.622 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:10:00.622 00:10:00.622 --- 10.0.0.1 ping statistics --- 00:10:00.622 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:00.622 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:10:00.623 16:58:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:00.623 16:58:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@433 -- # return 0 00:10:00.623 16:58:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:00.623 16:58:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:00.623 16:58:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:00.623 16:58:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:00.623 16:58:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:00.623 16:58:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:00.623 16:58:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:00.623 16:58:50 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:10:00.623 16:58:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:00.623 16:58:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:00.623 16:58:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:00.623 16:58:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=67572 00:10:00.623 16:58:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:00.623 16:58:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 67572 00:10:00.623 16:58:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 67572 ']' 00:10:00.623 16:58:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:00.623 16:58:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:00.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:00.623 16:58:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:00.623 16:58:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:00.623 16:58:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:00.882 [2024-07-15 16:58:50.941735] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:10:00.882 [2024-07-15 16:58:50.941841] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:00.882 [2024-07-15 16:58:51.084779] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:01.141 [2024-07-15 16:58:51.202239] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:01.141 [2024-07-15 16:58:51.202308] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:01.141 [2024-07-15 16:58:51.202336] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:01.141 [2024-07-15 16:58:51.202345] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:01.141 [2024-07-15 16:58:51.202352] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:01.141 [2024-07-15 16:58:51.202389] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:01.141 [2024-07-15 16:58:51.260245] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:01.709 16:58:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:01.709 16:58:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:10:01.709 16:58:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:01.709 16:58:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:01.709 16:58:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:01.709 16:58:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:01.709 16:58:51 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:10:01.709 16:58:51 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:10:01.709 16:58:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:01.709 16:58:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:01.709 [2024-07-15 16:58:51.944254] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:01.709 16:58:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:01.709 16:58:51 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:01.709 16:58:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:01.709 16:58:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:01.709 16:58:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:01.709 16:58:51 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:01.709 16:58:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:01.709 16:58:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:01.709 [2024-07-15 16:58:51.960327] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:01.709 16:58:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:01.709 16:58:51 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:01.709 16:58:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:01.709 16:58:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:01.709 16:58:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:01.709 16:58:51 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:10:01.709 16:58:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:01.709 16:58:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 
00:10:01.709 malloc0 00:10:01.709 16:58:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:01.709 16:58:51 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:01.709 16:58:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:01.709 16:58:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:01.709 16:58:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:01.709 16:58:51 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:10:01.709 16:58:51 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:10:01.709 16:58:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:10:01.709 16:58:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:10:01.709 16:58:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:01.709 16:58:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:01.709 { 00:10:01.709 "params": { 00:10:01.709 "name": "Nvme$subsystem", 00:10:01.709 "trtype": "$TEST_TRANSPORT", 00:10:01.709 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:01.709 "adrfam": "ipv4", 00:10:01.709 "trsvcid": "$NVMF_PORT", 00:10:01.709 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:01.709 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:01.709 "hdgst": ${hdgst:-false}, 00:10:01.709 "ddgst": ${ddgst:-false} 00:10:01.709 }, 00:10:01.709 "method": "bdev_nvme_attach_controller" 00:10:01.709 } 00:10:01.709 EOF 00:10:01.709 )") 00:10:01.709 16:58:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:10:01.709 16:58:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:10:01.968 16:58:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:10:01.968 16:58:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:01.968 "params": { 00:10:01.968 "name": "Nvme1", 00:10:01.968 "trtype": "tcp", 00:10:01.968 "traddr": "10.0.0.2", 00:10:01.968 "adrfam": "ipv4", 00:10:01.968 "trsvcid": "4420", 00:10:01.968 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:01.968 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:01.968 "hdgst": false, 00:10:01.968 "ddgst": false 00:10:01.968 }, 00:10:01.968 "method": "bdev_nvme_attach_controller" 00:10:01.968 }' 00:10:01.968 [2024-07-15 16:58:52.046809] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:10:01.968 [2024-07-15 16:58:52.046917] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67605 ] 00:10:01.968 [2024-07-15 16:58:52.183989] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:02.227 [2024-07-15 16:58:52.320842] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:02.227 [2024-07-15 16:58:52.389833] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:02.227 Running I/O for 10 seconds... 
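The target-side configuration in this stretch is done entirely over RPC. Assuming rpc_cmd is the autotest wrapper around SPDK's scripts/rpc.py (the wrapper name is all the log shows), the same zcopy-enabled target could be stood up by hand against the /var/tmp/spdk.sock socket announced above, with the flags copied verbatim from the logged commands:

    # transport options exactly as logged by zcopy.sh (zero-copy enabled)
    scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy
    # subsystem, data and discovery listeners, and a malloc bdev exposed as namespace 1
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

On the initiator side, the JSON printed above is the bdev_nvme_attach_controller parameter block that gen_nvmf_target_json presumably wraps in the usual "subsystems"/"bdev" config envelope (the wrapper itself is not shown in this log) before feeding it to bdevperf over /dev/fd/62. A standalone equivalent would save that config to a file, here hypothetically named nvme1.json, and run:

    build/examples/bdevperf --json nvme1.json -t 10 -q 128 -w verify -o 8192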
00:10:12.259 00:10:12.259 Latency(us) 00:10:12.259 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:12.259 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:10:12.259 Verification LBA range: start 0x0 length 0x1000 00:10:12.259 Nvme1n1 : 10.02 5878.93 45.93 0.00 0.00 21704.01 2129.92 31457.28 00:10:12.259 =================================================================================================================== 00:10:12.259 Total : 5878.93 45.93 0.00 0.00 21704.01 2129.92 31457.28 00:10:12.828 16:59:02 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=67721 00:10:12.828 16:59:02 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:10:12.828 16:59:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:12.828 16:59:02 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:10:12.828 16:59:02 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:10:12.828 16:59:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:10:12.828 16:59:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:10:12.828 16:59:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:12.828 16:59:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:12.828 { 00:10:12.828 "params": { 00:10:12.828 "name": "Nvme$subsystem", 00:10:12.828 "trtype": "$TEST_TRANSPORT", 00:10:12.828 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:12.828 "adrfam": "ipv4", 00:10:12.828 "trsvcid": "$NVMF_PORT", 00:10:12.828 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:12.828 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:12.828 "hdgst": ${hdgst:-false}, 00:10:12.828 "ddgst": ${ddgst:-false} 00:10:12.828 }, 00:10:12.828 "method": "bdev_nvme_attach_controller" 00:10:12.828 } 00:10:12.828 EOF 00:10:12.828 )") 00:10:12.828 16:59:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:10:12.828 [2024-07-15 16:59:02.865170] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.828 [2024-07-15 16:59:02.865219] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.828 16:59:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
00:10:12.828 16:59:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:10:12.828 16:59:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:12.828 "params": { 00:10:12.828 "name": "Nvme1", 00:10:12.828 "trtype": "tcp", 00:10:12.828 "traddr": "10.0.0.2", 00:10:12.828 "adrfam": "ipv4", 00:10:12.828 "trsvcid": "4420", 00:10:12.828 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:12.828 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:12.828 "hdgst": false, 00:10:12.828 "ddgst": false 00:10:12.828 }, 00:10:12.828 "method": "bdev_nvme_attach_controller" 00:10:12.828 }' 00:10:12.828 [2024-07-15 16:59:02.877131] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.828 [2024-07-15 16:59:02.877159] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.828 [2024-07-15 16:59:02.885120] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.828 [2024-07-15 16:59:02.885145] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.828 [2024-07-15 16:59:02.893120] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.828 [2024-07-15 16:59:02.893144] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.828 [2024-07-15 16:59:02.901122] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.828 [2024-07-15 16:59:02.901146] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.828 [2024-07-15 16:59:02.909125] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.828 [2024-07-15 16:59:02.909148] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.828 [2024-07-15 16:59:02.912955] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
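The 10-second verify pass reported above is internally consistent: at an 8 KiB I/O size, 5878.93 IOPS works out to 45.93 MiB/s, and with a queue depth of 128, Little's law predicts an average latency close to the reported 21704 us. A quick check using only the numbers from the results table:

    # throughput: IOPS * I/O size, converted to MiB/s
    echo '5878.93 * 8192 / 1048576' | bc -l    # ~45.93 MiB/s, matching the table
    # Little's law: queue depth / IOPS, in microseconds
    echo '128 / 5878.93 * 1000000' | bc -l     # ~21772 us, close to the reported 21704 us average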
00:10:12.828 [2024-07-15 16:59:02.913030] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67721 ] 00:10:12.828 [2024-07-15 16:59:02.917127] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.828 [2024-07-15 16:59:02.917151] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.828 [2024-07-15 16:59:02.925131] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.828 [2024-07-15 16:59:02.925156] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.828 [2024-07-15 16:59:02.933137] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.828 [2024-07-15 16:59:02.933160] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.828 [2024-07-15 16:59:02.941136] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.828 [2024-07-15 16:59:02.941178] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.828 [2024-07-15 16:59:02.949153] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.828 [2024-07-15 16:59:02.949176] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.828 [2024-07-15 16:59:02.957140] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.828 [2024-07-15 16:59:02.957182] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.828 [2024-07-15 16:59:02.965141] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.828 [2024-07-15 16:59:02.965166] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.828 [2024-07-15 16:59:02.973145] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.828 [2024-07-15 16:59:02.973168] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.828 [2024-07-15 16:59:02.981160] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.828 [2024-07-15 16:59:02.981184] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.828 [2024-07-15 16:59:02.989162] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.828 [2024-07-15 16:59:02.989202] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.829 [2024-07-15 16:59:02.997146] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.829 [2024-07-15 16:59:02.997168] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.829 [2024-07-15 16:59:03.005162] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.829 [2024-07-15 16:59:03.005186] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.829 [2024-07-15 16:59:03.013150] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.829 [2024-07-15 16:59:03.013172] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.829 [2024-07-15 16:59:03.021154] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.829 [2024-07-15 16:59:03.021177] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.829 [2024-07-15 16:59:03.029156] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.829 [2024-07-15 16:59:03.029179] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.829 [2024-07-15 16:59:03.037164] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.829 [2024-07-15 16:59:03.037188] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.829 [2024-07-15 16:59:03.045175] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.829 [2024-07-15 16:59:03.045217] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.829 [2024-07-15 16:59:03.049093] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:12.829 [2024-07-15 16:59:03.053178] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.829 [2024-07-15 16:59:03.053205] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.829 [2024-07-15 16:59:03.061179] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.829 [2024-07-15 16:59:03.061203] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.829 [2024-07-15 16:59:03.069180] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.829 [2024-07-15 16:59:03.069203] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.829 [2024-07-15 16:59:03.081184] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.829 [2024-07-15 16:59:03.081207] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.829 [2024-07-15 16:59:03.089186] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.829 [2024-07-15 16:59:03.089209] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.829 [2024-07-15 16:59:03.097204] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.829 [2024-07-15 16:59:03.097227] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.829 [2024-07-15 16:59:03.105190] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.829 [2024-07-15 16:59:03.105214] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.829 [2024-07-15 16:59:03.113202] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.829 [2024-07-15 16:59:03.113225] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.829 [2024-07-15 16:59:03.121188] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.829 [2024-07-15 16:59:03.121210] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.089 [2024-07-15 16:59:03.129196] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.089 [2024-07-15 16:59:03.129234] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.089 [2024-07-15 16:59:03.137195] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:13.089 [2024-07-15 16:59:03.137217] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.089 [2024-07-15 16:59:03.145207] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.089 [2024-07-15 16:59:03.145234] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.089 [2024-07-15 16:59:03.153205] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.089 [2024-07-15 16:59:03.153228] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.089 [2024-07-15 16:59:03.161203] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.089 [2024-07-15 16:59:03.161225] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.089 [2024-07-15 16:59:03.169205] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.089 [2024-07-15 16:59:03.169228] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.089 [2024-07-15 16:59:03.177207] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.089 [2024-07-15 16:59:03.177229] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.089 [2024-07-15 16:59:03.185212] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.090 [2024-07-15 16:59:03.185235] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.090 [2024-07-15 16:59:03.191456] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:13.090 [2024-07-15 16:59:03.193226] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.090 [2024-07-15 16:59:03.193249] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.090 [2024-07-15 16:59:03.201233] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.090 [2024-07-15 16:59:03.201255] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.090 [2024-07-15 16:59:03.209221] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.090 [2024-07-15 16:59:03.209244] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.090 [2024-07-15 16:59:03.217220] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.090 [2024-07-15 16:59:03.217242] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.090 [2024-07-15 16:59:03.225256] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.090 [2024-07-15 16:59:03.225279] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.090 [2024-07-15 16:59:03.233222] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.090 [2024-07-15 16:59:03.233244] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.091 [2024-07-15 16:59:03.241223] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.091 [2024-07-15 16:59:03.241262] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.091 [2024-07-15 16:59:03.249224] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:10:13.091 [2024-07-15 16:59:03.249247] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.091 [2024-07-15 16:59:03.257230] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.091 [2024-07-15 16:59:03.257253] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.091 [2024-07-15 16:59:03.265232] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.091 [2024-07-15 16:59:03.265256] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.091 [2024-07-15 16:59:03.271176] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:13.091 [2024-07-15 16:59:03.273234] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.091 [2024-07-15 16:59:03.273258] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.091 [2024-07-15 16:59:03.281233] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.092 [2024-07-15 16:59:03.281256] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.092 [2024-07-15 16:59:03.289235] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.092 [2024-07-15 16:59:03.289257] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.092 [2024-07-15 16:59:03.297258] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.092 [2024-07-15 16:59:03.297281] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.092 [2024-07-15 16:59:03.305243] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.092 [2024-07-15 16:59:03.305265] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.092 [2024-07-15 16:59:03.313249] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.092 [2024-07-15 16:59:03.313273] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.092 [2024-07-15 16:59:03.321240] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.092 [2024-07-15 16:59:03.321262] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.092 [2024-07-15 16:59:03.329267] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.092 [2024-07-15 16:59:03.329295] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.092 [2024-07-15 16:59:03.337266] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.092 [2024-07-15 16:59:03.337309] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.092 [2024-07-15 16:59:03.345274] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.092 [2024-07-15 16:59:03.345318] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.092 [2024-07-15 16:59:03.353282] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.092 [2024-07-15 16:59:03.353328] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.092 [2024-07-15 16:59:03.361305] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.092 
[2024-07-15 16:59:03.361338] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.092 [2024-07-15 16:59:03.369314] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.092 [2024-07-15 16:59:03.369360] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.092 [2024-07-15 16:59:03.377299] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.092 [2024-07-15 16:59:03.377341] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.092 [2024-07-15 16:59:03.385304] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.092 [2024-07-15 16:59:03.385345] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.352 [2024-07-15 16:59:03.393325] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.352 [2024-07-15 16:59:03.393397] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.352 [2024-07-15 16:59:03.401345] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.352 [2024-07-15 16:59:03.401397] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.352 Running I/O for 5 seconds... 00:10:13.352 [2024-07-15 16:59:03.412946] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.352 [2024-07-15 16:59:03.412993] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.352 [2024-07-15 16:59:03.422583] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.352 [2024-07-15 16:59:03.422617] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.352 [2024-07-15 16:59:03.434570] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.352 [2024-07-15 16:59:03.434604] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.352 [2024-07-15 16:59:03.445474] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.352 [2024-07-15 16:59:03.445506] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.352 [2024-07-15 16:59:03.458491] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.352 [2024-07-15 16:59:03.458525] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.352 [2024-07-15 16:59:03.469012] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.352 [2024-07-15 16:59:03.469060] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.352 [2024-07-15 16:59:03.480333] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.352 [2024-07-15 16:59:03.480390] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.352 [2024-07-15 16:59:03.492030] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.352 [2024-07-15 16:59:03.492063] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.352 [2024-07-15 16:59:03.504879] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.352 [2024-07-15 16:59:03.504912] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.352 
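Each "Requested NSID 1 already in use" / "Unable to add namespace" pair in this stretch appears to be the target rejecting one nvmf_subsystem_add_ns RPC issued by the harness while the 5-second random read/write zcopy workload is in flight: NSID 1 is already occupied by malloc0, and the function name nvmf_rpc_ns_paused suggests the subsystem is paused around each attempted change and resumed after the add is refused. A single rejection of this kind can be reproduced by repeating the add that succeeded earlier (a manual sketch, not part of the test script):

    # second add with the same NSID; expected to fail with the error pair seen above
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1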
[2024-07-15 16:59:03.522876] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.352 [2024-07-15 16:59:03.522910] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.352 [2024-07-15 16:59:03.537778] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.352 [2024-07-15 16:59:03.537828] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.352 [2024-07-15 16:59:03.547592] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.352 [2024-07-15 16:59:03.547637] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.352 [2024-07-15 16:59:03.559241] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.352 [2024-07-15 16:59:03.559289] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.352 [2024-07-15 16:59:03.570668] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.352 [2024-07-15 16:59:03.570703] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.352 [2024-07-15 16:59:03.587107] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.352 [2024-07-15 16:59:03.587165] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.352 [2024-07-15 16:59:03.596985] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.352 [2024-07-15 16:59:03.597043] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.352 [2024-07-15 16:59:03.608610] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.352 [2024-07-15 16:59:03.608643] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.352 [2024-07-15 16:59:03.621487] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.352 [2024-07-15 16:59:03.621541] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.352 [2024-07-15 16:59:03.631339] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.352 [2024-07-15 16:59:03.631412] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.352 [2024-07-15 16:59:03.644321] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.352 [2024-07-15 16:59:03.644387] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.610 [2024-07-15 16:59:03.655929] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.610 [2024-07-15 16:59:03.655978] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.611 [2024-07-15 16:59:03.666908] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.611 [2024-07-15 16:59:03.666957] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.611 [2024-07-15 16:59:03.678359] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.611 [2024-07-15 16:59:03.678416] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.611 [2024-07-15 16:59:03.689551] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.611 [2024-07-15 
16:59:03.689586] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.611 [2024-07-15 16:59:03.704784] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.611 [2024-07-15 16:59:03.704835] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.611 [2024-07-15 16:59:03.715110] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.611 [2024-07-15 16:59:03.715158] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.611 [2024-07-15 16:59:03.730173] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.611 [2024-07-15 16:59:03.730222] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.611 [2024-07-15 16:59:03.741030] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.611 [2024-07-15 16:59:03.741077] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.611 [2024-07-15 16:59:03.751749] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.611 [2024-07-15 16:59:03.751785] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.611 [2024-07-15 16:59:03.763335] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.611 [2024-07-15 16:59:03.763408] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.611 [2024-07-15 16:59:03.779039] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.611 [2024-07-15 16:59:03.779072] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.611 [2024-07-15 16:59:03.789136] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.611 [2024-07-15 16:59:03.789186] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.611 [2024-07-15 16:59:03.800810] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.611 [2024-07-15 16:59:03.800843] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.611 [2024-07-15 16:59:03.811640] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.611 [2024-07-15 16:59:03.811676] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.611 [2024-07-15 16:59:03.826948] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.611 [2024-07-15 16:59:03.826996] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.611 [2024-07-15 16:59:03.837354] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.611 [2024-07-15 16:59:03.837413] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.611 [2024-07-15 16:59:03.849313] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.611 [2024-07-15 16:59:03.849360] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.611 [2024-07-15 16:59:03.860519] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.611 [2024-07-15 16:59:03.860553] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.611 [2024-07-15 16:59:03.872282] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.611 [2024-07-15 16:59:03.872315] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.611 [2024-07-15 16:59:03.888680] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.611 [2024-07-15 16:59:03.888713] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.611 [2024-07-15 16:59:03.906527] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.611 [2024-07-15 16:59:03.906575] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.869 [2024-07-15 16:59:03.917657] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.869 [2024-07-15 16:59:03.917689] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.869 [2024-07-15 16:59:03.935256] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.869 [2024-07-15 16:59:03.935290] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.869 [2024-07-15 16:59:03.951745] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.869 [2024-07-15 16:59:03.951780] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.869 [2024-07-15 16:59:03.968345] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.869 [2024-07-15 16:59:03.968423] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.869 [2024-07-15 16:59:03.978188] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.869 [2024-07-15 16:59:03.978237] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.869 [2024-07-15 16:59:03.990412] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.869 [2024-07-15 16:59:03.990441] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.869 [2024-07-15 16:59:04.001515] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.869 [2024-07-15 16:59:04.001548] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.869 [2024-07-15 16:59:04.014680] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.869 [2024-07-15 16:59:04.014712] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.869 [2024-07-15 16:59:04.025682] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.869 [2024-07-15 16:59:04.025745] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.869 [2024-07-15 16:59:04.037289] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.869 [2024-07-15 16:59:04.037336] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.869 [2024-07-15 16:59:04.047948] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.869 [2024-07-15 16:59:04.047995] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.869 [2024-07-15 16:59:04.059489] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.869 [2024-07-15 16:59:04.059522] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.869 [2024-07-15 16:59:04.070308] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.869 [2024-07-15 16:59:04.070354] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.869 [2024-07-15 16:59:04.084032] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.869 [2024-07-15 16:59:04.084096] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.870 [2024-07-15 16:59:04.099828] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.870 [2024-07-15 16:59:04.099876] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.870 [2024-07-15 16:59:04.109315] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.870 [2024-07-15 16:59:04.109361] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.870 [2024-07-15 16:59:04.121123] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.870 [2024-07-15 16:59:04.121155] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.870 [2024-07-15 16:59:04.132032] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.870 [2024-07-15 16:59:04.132067] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.870 [2024-07-15 16:59:04.143222] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.870 [2024-07-15 16:59:04.143256] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.870 [2024-07-15 16:59:04.155872] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.870 [2024-07-15 16:59:04.155905] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.128 [2024-07-15 16:59:04.167647] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.128 [2024-07-15 16:59:04.167699] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.128 [2024-07-15 16:59:04.181757] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.128 [2024-07-15 16:59:04.181829] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.128 [2024-07-15 16:59:04.195916] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.128 [2024-07-15 16:59:04.195968] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.128 [2024-07-15 16:59:04.213134] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.128 [2024-07-15 16:59:04.213171] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.128 [2024-07-15 16:59:04.225055] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.128 [2024-07-15 16:59:04.225125] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.128 [2024-07-15 16:59:04.238825] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.128 [2024-07-15 16:59:04.238931] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.128 [2024-07-15 16:59:04.252181] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.128 [2024-07-15 16:59:04.252231] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.128 [2024-07-15 16:59:04.266992] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.128 [2024-07-15 16:59:04.267041] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.128 [2024-07-15 16:59:04.286832] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.128 [2024-07-15 16:59:04.286882] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.128 [2024-07-15 16:59:04.298097] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.128 [2024-07-15 16:59:04.298147] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.128 [2024-07-15 16:59:04.314533] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.128 [2024-07-15 16:59:04.314573] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.128 [2024-07-15 16:59:04.324782] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.128 [2024-07-15 16:59:04.324832] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.129 [2024-07-15 16:59:04.336295] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.129 [2024-07-15 16:59:04.336344] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.129 [2024-07-15 16:59:04.347237] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.129 [2024-07-15 16:59:04.347288] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.129 [2024-07-15 16:59:04.358556] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.129 [2024-07-15 16:59:04.358593] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.129 [2024-07-15 16:59:04.369639] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.129 [2024-07-15 16:59:04.369675] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.129 [2024-07-15 16:59:04.380668] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.129 [2024-07-15 16:59:04.380716] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.129 [2024-07-15 16:59:04.392031] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.129 [2024-07-15 16:59:04.392080] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.129 [2024-07-15 16:59:04.406593] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.129 [2024-07-15 16:59:04.406628] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.129 [2024-07-15 16:59:04.417545] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.129 [2024-07-15 16:59:04.417578] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.387 [2024-07-15 16:59:04.428875] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.387 [2024-07-15 16:59:04.428924] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.387 [2024-07-15 16:59:04.444698] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.387 [2024-07-15 16:59:04.444733] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.387 [2024-07-15 16:59:04.461327] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.387 [2024-07-15 16:59:04.461373] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.387 [2024-07-15 16:59:04.471252] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.387 [2024-07-15 16:59:04.471300] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.387 [2024-07-15 16:59:04.482317] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.387 [2024-07-15 16:59:04.482365] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.387 [2024-07-15 16:59:04.493049] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.387 [2024-07-15 16:59:04.493096] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.387 [2024-07-15 16:59:04.504114] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.387 [2024-07-15 16:59:04.504164] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.387 [2024-07-15 16:59:04.516806] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.387 [2024-07-15 16:59:04.516839] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.387 [2024-07-15 16:59:04.526074] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.387 [2024-07-15 16:59:04.526108] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.387 [2024-07-15 16:59:04.539265] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.387 [2024-07-15 16:59:04.539312] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.387 [2024-07-15 16:59:04.555974] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.387 [2024-07-15 16:59:04.556023] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.387 [2024-07-15 16:59:04.565928] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.387 [2024-07-15 16:59:04.565976] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.387 [2024-07-15 16:59:04.577327] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.387 [2024-07-15 16:59:04.577388] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.387 [2024-07-15 16:59:04.588292] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.387 [2024-07-15 16:59:04.588341] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.387 [2024-07-15 16:59:04.601502] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.387 [2024-07-15 16:59:04.601538] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.387 [2024-07-15 16:59:04.617862] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.387 [2024-07-15 16:59:04.617911] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.387 [2024-07-15 16:59:04.635657] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.387 [2024-07-15 16:59:04.635692] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.387 [2024-07-15 16:59:04.646225] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.387 [2024-07-15 16:59:04.646257] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.387 [2024-07-15 16:59:04.661131] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.387 [2024-07-15 16:59:04.661182] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.387 [2024-07-15 16:59:04.671145] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.387 [2024-07-15 16:59:04.671193] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.388 [2024-07-15 16:59:04.682621] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.388 [2024-07-15 16:59:04.682657] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.646 [2024-07-15 16:59:04.693890] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.646 [2024-07-15 16:59:04.693923] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.646 [2024-07-15 16:59:04.704970] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.646 [2024-07-15 16:59:04.705020] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.646 [2024-07-15 16:59:04.715706] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.646 [2024-07-15 16:59:04.715742] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.646 [2024-07-15 16:59:04.727767] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.646 [2024-07-15 16:59:04.727801] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.646 [2024-07-15 16:59:04.742854] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.646 [2024-07-15 16:59:04.742889] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.646 [2024-07-15 16:59:04.752117] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.646 [2024-07-15 16:59:04.752167] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.646 [2024-07-15 16:59:04.764208] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.646 [2024-07-15 16:59:04.764258] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.646 [2024-07-15 16:59:04.779504] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.646 [2024-07-15 16:59:04.779561] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.646 [2024-07-15 16:59:04.793790] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.646 [2024-07-15 16:59:04.793838] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.646 [2024-07-15 16:59:04.809230] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.646 [2024-07-15 16:59:04.809282] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.646 [2024-07-15 16:59:04.818828] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.646 [2024-07-15 16:59:04.818876] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.646 [2024-07-15 16:59:04.835189] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.646 [2024-07-15 16:59:04.835237] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.646 [2024-07-15 16:59:04.852726] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.646 [2024-07-15 16:59:04.852775] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.646 [2024-07-15 16:59:04.862887] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.646 [2024-07-15 16:59:04.862934] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.646 [2024-07-15 16:59:04.873736] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.646 [2024-07-15 16:59:04.873783] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.646 [2024-07-15 16:59:04.886253] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.646 [2024-07-15 16:59:04.886300] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.646 [2024-07-15 16:59:04.896423] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.646 [2024-07-15 16:59:04.896456] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.646 [2024-07-15 16:59:04.908198] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.647 [2024-07-15 16:59:04.908245] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.647 [2024-07-15 16:59:04.919259] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.647 [2024-07-15 16:59:04.919309] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.647 [2024-07-15 16:59:04.930305] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.647 [2024-07-15 16:59:04.930337] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.647 [2024-07-15 16:59:04.943510] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.647 [2024-07-15 16:59:04.943569] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.905 [2024-07-15 16:59:04.954571] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.905 [2024-07-15 16:59:04.954605] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.905 [2024-07-15 16:59:04.969773] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.905 [2024-07-15 16:59:04.969806] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.905 [2024-07-15 16:59:04.984536] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.905 [2024-07-15 16:59:04.984569] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.905 [2024-07-15 16:59:04.993677] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.905 [2024-07-15 16:59:04.993724] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.905 [2024-07-15 16:59:05.005444] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.905 [2024-07-15 16:59:05.005478] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.905 [2024-07-15 16:59:05.016538] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.905 [2024-07-15 16:59:05.016592] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.905 [2024-07-15 16:59:05.029102] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.905 [2024-07-15 16:59:05.029150] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.905 [2024-07-15 16:59:05.038714] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.905 [2024-07-15 16:59:05.038763] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.905 [2024-07-15 16:59:05.051709] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.905 [2024-07-15 16:59:05.051744] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.905 [2024-07-15 16:59:05.062229] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.905 [2024-07-15 16:59:05.062276] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.905 [2024-07-15 16:59:05.076829] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.905 [2024-07-15 16:59:05.076877] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.905 [2024-07-15 16:59:05.093463] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.905 [2024-07-15 16:59:05.093497] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.905 [2024-07-15 16:59:05.103253] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.905 [2024-07-15 16:59:05.103301] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.905 [2024-07-15 16:59:05.114900] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.905 [2024-07-15 16:59:05.114946] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.905 [2024-07-15 16:59:05.125270] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.905 [2024-07-15 16:59:05.125319] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.905 [2024-07-15 16:59:05.135958] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.905 [2024-07-15 16:59:05.135990] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.905 [2024-07-15 16:59:05.147052] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.905 [2024-07-15 16:59:05.147101] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.905 [2024-07-15 16:59:05.158240] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.905 [2024-07-15 16:59:05.158273] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.905 [2024-07-15 16:59:05.174263] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.905 [2024-07-15 16:59:05.174312] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.905 [2024-07-15 16:59:05.183800] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.905 [2024-07-15 16:59:05.183847] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.905 [2024-07-15 16:59:05.195333] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.905 [2024-07-15 16:59:05.195391] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.163 [2024-07-15 16:59:05.206734] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.163 [2024-07-15 16:59:05.206780] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.163 [2024-07-15 16:59:05.219525] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.163 [2024-07-15 16:59:05.219583] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.163 [2024-07-15 16:59:05.228892] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.163 [2024-07-15 16:59:05.228940] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.163 [2024-07-15 16:59:05.243072] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.163 [2024-07-15 16:59:05.243120] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.163 [2024-07-15 16:59:05.253957] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.163 [2024-07-15 16:59:05.254020] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.163 [2024-07-15 16:59:05.264841] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.163 [2024-07-15 16:59:05.264872] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.163 [2024-07-15 16:59:05.276311] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.163 [2024-07-15 16:59:05.276344] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.163 [2024-07-15 16:59:05.289131] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.163 [2024-07-15 16:59:05.289170] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.163 [2024-07-15 16:59:05.306791] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.163 [2024-07-15 16:59:05.306889] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.163 [2024-07-15 16:59:05.322447] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.163 [2024-07-15 16:59:05.322536] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.163 [2024-07-15 16:59:05.333917] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.163 [2024-07-15 16:59:05.333969] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.163 [2024-07-15 16:59:05.345844] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.163 [2024-07-15 16:59:05.345898] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.163 [2024-07-15 16:59:05.360441] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.163 [2024-07-15 16:59:05.360501] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.163 [2024-07-15 16:59:05.370702] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.163 [2024-07-15 16:59:05.370764] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.163 [2024-07-15 16:59:05.385221] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.163 [2024-07-15 16:59:05.385283] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.163 [2024-07-15 16:59:05.394928] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.163 [2024-07-15 16:59:05.394964] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.163 [2024-07-15 16:59:05.410484] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.163 [2024-07-15 16:59:05.410536] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.163 [2024-07-15 16:59:05.428333] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.163 [2024-07-15 16:59:05.428415] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.163 [2024-07-15 16:59:05.443121] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.163 [2024-07-15 16:59:05.443175] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.163 [2024-07-15 16:59:05.452530] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.163 [2024-07-15 16:59:05.452568] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.421 [2024-07-15 16:59:05.463876] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.421 [2024-07-15 16:59:05.463928] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.421 [2024-07-15 16:59:05.474712] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.421 [2024-07-15 16:59:05.474747] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.421 [2024-07-15 16:59:05.485760] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.421 [2024-07-15 16:59:05.485796] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.421 [2024-07-15 16:59:05.497019] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.421 [2024-07-15 16:59:05.497057] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.421 [2024-07-15 16:59:05.507719] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.421 [2024-07-15 16:59:05.507757] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.421 [2024-07-15 16:59:05.518423] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.421 [2024-07-15 16:59:05.518458] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.421 [2024-07-15 16:59:05.531389] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.421 [2024-07-15 16:59:05.531449] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.421 [2024-07-15 16:59:05.547397] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.421 [2024-07-15 16:59:05.547486] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.421 [2024-07-15 16:59:05.564494] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.421 [2024-07-15 16:59:05.564533] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.421 [2024-07-15 16:59:05.574569] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.421 [2024-07-15 16:59:05.574613] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.421 [2024-07-15 16:59:05.586922] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.421 [2024-07-15 16:59:05.586959] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.421 [2024-07-15 16:59:05.598127] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.421 [2024-07-15 16:59:05.598164] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.421 [2024-07-15 16:59:05.612014] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.421 [2024-07-15 16:59:05.612050] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.421 [2024-07-15 16:59:05.628794] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.421 [2024-07-15 16:59:05.628829] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.421 [2024-07-15 16:59:05.646599] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.421 [2024-07-15 16:59:05.646633] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.421 [2024-07-15 16:59:05.657141] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.421 [2024-07-15 16:59:05.657178] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.421 [2024-07-15 16:59:05.671630] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.421 [2024-07-15 16:59:05.671667] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.421 [2024-07-15 16:59:05.681053] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.421 [2024-07-15 16:59:05.681087] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.421 [2024-07-15 16:59:05.696560] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.421 [2024-07-15 16:59:05.696610] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.421 [2024-07-15 16:59:05.713070] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.421 [2024-07-15 16:59:05.713112] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.684 [2024-07-15 16:59:05.723844] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.684 [2024-07-15 16:59:05.723881] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.684 [2024-07-15 16:59:05.735486] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.684 [2024-07-15 16:59:05.735523] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.684 [2024-07-15 16:59:05.746207] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.684 [2024-07-15 16:59:05.746246] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.684 [2024-07-15 16:59:05.757172] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.684 [2024-07-15 16:59:05.757208] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.684 [2024-07-15 16:59:05.774216] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.684 [2024-07-15 16:59:05.774255] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.684 [2024-07-15 16:59:05.783391] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.684 [2024-07-15 16:59:05.783424] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.684 [2024-07-15 16:59:05.796971] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.684 [2024-07-15 16:59:05.797022] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.684 [2024-07-15 16:59:05.812943] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.684 [2024-07-15 16:59:05.812999] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.684 [2024-07-15 16:59:05.829687] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.684 [2024-07-15 16:59:05.829745] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.684 [2024-07-15 16:59:05.846124] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.684 [2024-07-15 16:59:05.846185] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.684 [2024-07-15 16:59:05.862454] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.684 [2024-07-15 16:59:05.862514] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.684 [2024-07-15 16:59:05.880717] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.684 [2024-07-15 16:59:05.880778] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.684 [2024-07-15 16:59:05.896164] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.684 [2024-07-15 16:59:05.896227] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.684 [2024-07-15 16:59:05.906739] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.684 [2024-07-15 16:59:05.906792] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.684 [2024-07-15 16:59:05.921339] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.684 [2024-07-15 16:59:05.921407] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.684 [2024-07-15 16:59:05.931567] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.684 [2024-07-15 16:59:05.931616] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.684 [2024-07-15 16:59:05.946708] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.684 [2024-07-15 16:59:05.946771] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.684 [2024-07-15 16:59:05.962084] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.684 [2024-07-15 16:59:05.962143] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.684 [2024-07-15 16:59:05.972112] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.684 [2024-07-15 16:59:05.972166] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.973 [2024-07-15 16:59:05.985241] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.973 [2024-07-15 16:59:05.985284] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.973 [2024-07-15 16:59:06.001453] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.973 [2024-07-15 16:59:06.001522] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.973 [2024-07-15 16:59:06.017770] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.973 [2024-07-15 16:59:06.017836] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.973 [2024-07-15 16:59:06.035837] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.973 [2024-07-15 16:59:06.035905] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.973 [2024-07-15 16:59:06.050871] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.973 [2024-07-15 16:59:06.050931] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.973 [2024-07-15 16:59:06.066382] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.973 [2024-07-15 16:59:06.066437] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.973 [2024-07-15 16:59:06.076300] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.973 [2024-07-15 16:59:06.076340] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.973 [2024-07-15 16:59:06.087927] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.973 [2024-07-15 16:59:06.087995] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.973 [2024-07-15 16:59:06.098582] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.973 [2024-07-15 16:59:06.098635] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.973 [2024-07-15 16:59:06.113253] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.973 [2024-07-15 16:59:06.113306] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.973 [2024-07-15 16:59:06.123146] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.973 [2024-07-15 16:59:06.123199] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.973 [2024-07-15 16:59:06.138540] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.973 [2024-07-15 16:59:06.138593] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.973 [2024-07-15 16:59:06.148389] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.973 [2024-07-15 16:59:06.148453] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.973 [2024-07-15 16:59:06.160552] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.973 [2024-07-15 16:59:06.160606] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.973 [2024-07-15 16:59:06.171962] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.973 [2024-07-15 16:59:06.172031] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.973 [2024-07-15 16:59:06.184939] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.973 [2024-07-15 16:59:06.184993] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.973 [2024-07-15 16:59:06.201798] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.973 [2024-07-15 16:59:06.201853] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.973 [2024-07-15 16:59:06.215517] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.973 [2024-07-15 16:59:06.215580] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.973 [2024-07-15 16:59:06.224792] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.973 [2024-07-15 16:59:06.224859] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.973 [2024-07-15 16:59:06.236791] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.973 [2024-07-15 16:59:06.236842] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.973 [2024-07-15 16:59:06.247254] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.973 [2024-07-15 16:59:06.247307] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.973 [2024-07-15 16:59:06.258591] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.973 [2024-07-15 16:59:06.258644] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.973 [2024-07-15 16:59:06.269280] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.973 [2024-07-15 16:59:06.269340] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.232 [2024-07-15 16:59:06.280708] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.232 [2024-07-15 16:59:06.280781] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.232 [2024-07-15 16:59:06.293135] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.232 [2024-07-15 16:59:06.293185] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.232 [2024-07-15 16:59:06.303034] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.232 [2024-07-15 16:59:06.303085] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.232 [2024-07-15 16:59:06.315457] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.232 [2024-07-15 16:59:06.315508] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.232 [2024-07-15 16:59:06.325614] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.232 [2024-07-15 16:59:06.325651] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.232 [2024-07-15 16:59:06.336343] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.232 [2024-07-15 16:59:06.336405] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.232 [2024-07-15 16:59:06.348069] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.232 [2024-07-15 16:59:06.348120] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.232 [2024-07-15 16:59:06.357981] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.232 [2024-07-15 16:59:06.358032] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.232 [2024-07-15 16:59:06.369972] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.232 [2024-07-15 16:59:06.370037] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.232 [2024-07-15 16:59:06.384822] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.232 [2024-07-15 16:59:06.384872] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.232 [2024-07-15 16:59:06.401588] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.232 [2024-07-15 16:59:06.401641] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.232 [2024-07-15 16:59:06.411334] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.232 [2024-07-15 16:59:06.411408] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.232 [2024-07-15 16:59:06.425470] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.232 [2024-07-15 16:59:06.425522] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.232 [2024-07-15 16:59:06.435241] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.232 [2024-07-15 16:59:06.435291] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.232 [2024-07-15 16:59:06.450782] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.232 [2024-07-15 16:59:06.450843] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.232 [2024-07-15 16:59:06.466332] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.232 [2024-07-15 16:59:06.466417] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.232 [2024-07-15 16:59:06.476076] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.232 [2024-07-15 16:59:06.476129] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.232 [2024-07-15 16:59:06.491787] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.232 [2024-07-15 16:59:06.491825] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.232 [2024-07-15 16:59:06.507371] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.232 [2024-07-15 16:59:06.507432] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.232 [2024-07-15 16:59:06.516800] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.232 [2024-07-15 16:59:06.516851] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.491 [2024-07-15 16:59:06.530695] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.491 [2024-07-15 16:59:06.530749] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.491 [2024-07-15 16:59:06.545325] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.491 [2024-07-15 16:59:06.545390] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.491 [2024-07-15 16:59:06.554532] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.491 [2024-07-15 16:59:06.554582] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.491 [2024-07-15 16:59:06.569592] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.491 [2024-07-15 16:59:06.569655] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.491 [2024-07-15 16:59:06.580592] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.491 [2024-07-15 16:59:06.580655] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.491 [2024-07-15 16:59:06.595304] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.491 [2024-07-15 16:59:06.595384] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.491 [2024-07-15 16:59:06.612053] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.491 [2024-07-15 16:59:06.612118] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.491 [2024-07-15 16:59:06.629333] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.491 [2024-07-15 16:59:06.629395] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.491 [2024-07-15 16:59:06.639644] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.491 [2024-07-15 16:59:06.639681] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.491 [2024-07-15 16:59:06.654184] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.491 [2024-07-15 16:59:06.654235] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.491 [2024-07-15 16:59:06.664420] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.491 [2024-07-15 16:59:06.664486] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.491 [2024-07-15 16:59:06.679591] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.491 [2024-07-15 16:59:06.679630] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.491 [2024-07-15 16:59:06.696305] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.491 [2024-07-15 16:59:06.696369] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.491 [2024-07-15 16:59:06.705939] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.491 [2024-07-15 16:59:06.705973] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.491 [2024-07-15 16:59:06.717430] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.491 [2024-07-15 16:59:06.717478] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.491 [2024-07-15 16:59:06.727982] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.491 [2024-07-15 16:59:06.728018] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.491 [2024-07-15 16:59:06.743138] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.491 [2024-07-15 16:59:06.743175] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.491 [2024-07-15 16:59:06.760471] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.491 [2024-07-15 16:59:06.760508] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.491 [2024-07-15 16:59:06.775894] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.491 [2024-07-15 16:59:06.775931] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.491 [2024-07-15 16:59:06.785476] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.491 [2024-07-15 16:59:06.785516] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.751 [2024-07-15 16:59:06.797421] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.751 [2024-07-15 16:59:06.797461] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.751 [2024-07-15 16:59:06.812033] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.751 [2024-07-15 16:59:06.812081] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.751 [2024-07-15 16:59:06.827221] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.751 [2024-07-15 16:59:06.827263] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.751 [2024-07-15 16:59:06.836817] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.751 [2024-07-15 16:59:06.836858] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.751 [2024-07-15 16:59:06.852970] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.751 [2024-07-15 16:59:06.853022] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.751 [2024-07-15 16:59:06.868430] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.751 [2024-07-15 16:59:06.868480] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.751 [2024-07-15 16:59:06.884680] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.751 [2024-07-15 16:59:06.884735] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.751 [2024-07-15 16:59:06.894661] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.751 [2024-07-15 16:59:06.894726] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.751 [2024-07-15 16:59:06.909353] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.751 [2024-07-15 16:59:06.909436] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.751 [2024-07-15 16:59:06.920050] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.751 [2024-07-15 16:59:06.920107] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.751 [2024-07-15 16:59:06.934288] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.751 [2024-07-15 16:59:06.934328] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.751 [2024-07-15 16:59:06.943794] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.751 [2024-07-15 16:59:06.943872] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.751 [2024-07-15 16:59:06.956548] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.751 [2024-07-15 16:59:06.956597] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.751 [2024-07-15 16:59:06.972685] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.751 [2024-07-15 16:59:06.972765] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.751 [2024-07-15 16:59:06.989485] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.751 [2024-07-15 16:59:06.989548] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.751 [2024-07-15 16:59:06.999564] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.751 [2024-07-15 16:59:06.999605] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.751 [2024-07-15 16:59:07.014070] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.751 [2024-07-15 16:59:07.014110] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.751 [2024-07-15 16:59:07.024738] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.751 [2024-07-15 16:59:07.024787] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.751 [2024-07-15 16:59:07.039973] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.751 [2024-07-15 16:59:07.040025] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.010 [2024-07-15 16:59:07.055944] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.010 [2024-07-15 16:59:07.056003] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.010 [2024-07-15 16:59:07.065255] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.010 [2024-07-15 16:59:07.065307] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.010 [2024-07-15 16:59:07.077340] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.010 [2024-07-15 16:59:07.077441] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.010 [2024-07-15 16:59:07.093546] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.010 [2024-07-15 16:59:07.093605] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.010 [2024-07-15 16:59:07.102654] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.010 [2024-07-15 16:59:07.102707] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.010 [2024-07-15 16:59:07.117357] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.010 [2024-07-15 16:59:07.117437] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.010 [2024-07-15 16:59:07.126774] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.010 [2024-07-15 16:59:07.126827] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.010 [2024-07-15 16:59:07.142056] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.010 [2024-07-15 16:59:07.142108] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.010 [2024-07-15 16:59:07.152175] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.010 [2024-07-15 16:59:07.152224] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.010 [2024-07-15 16:59:07.166210] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.010 [2024-07-15 16:59:07.166260] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.011 [2024-07-15 16:59:07.176503] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.011 [2024-07-15 16:59:07.176553] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.011 [2024-07-15 16:59:07.191020] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.011 [2024-07-15 16:59:07.191071] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.011 [2024-07-15 16:59:07.201400] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.011 [2024-07-15 16:59:07.201447] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.011 [2024-07-15 16:59:07.213069] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.011 [2024-07-15 16:59:07.213120] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.011 [2024-07-15 16:59:07.227745] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.011 [2024-07-15 16:59:07.227783] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.011 [2024-07-15 16:59:07.244961] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.011 [2024-07-15 16:59:07.245023] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.011 [2024-07-15 16:59:07.255022] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.011 [2024-07-15 16:59:07.255074] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.011 [2024-07-15 16:59:07.266557] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.011 [2024-07-15 16:59:07.266598] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.011 [2024-07-15 16:59:07.277176] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.011 [2024-07-15 16:59:07.277225] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.011 [2024-07-15 16:59:07.287858] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.011 [2024-07-15 16:59:07.287924] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.011 [2024-07-15 16:59:07.300327] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.011 [2024-07-15 16:59:07.300411] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.269 [2024-07-15 16:59:07.310476] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.269 [2024-07-15 16:59:07.310521] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.269 [2024-07-15 16:59:07.324156] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.269 [2024-07-15 16:59:07.324247] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.269 [2024-07-15 16:59:07.338656] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.269 [2024-07-15 16:59:07.338706] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.269 [2024-07-15 16:59:07.347689] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.269 [2024-07-15 16:59:07.347726] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.269 [2024-07-15 16:59:07.360683] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.269 [2024-07-15 16:59:07.360751] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.269 [2024-07-15 16:59:07.371719] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.269 [2024-07-15 16:59:07.371757] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.269 [2024-07-15 16:59:07.382969] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.269 [2024-07-15 16:59:07.383008] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.269 [2024-07-15 16:59:07.395662] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.269 [2024-07-15 16:59:07.395703] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.269 [2024-07-15 16:59:07.412475] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.269 [2024-07-15 16:59:07.412518] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.269 [2024-07-15 16:59:07.430277] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.269 [2024-07-15 16:59:07.430346] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.269 [2024-07-15 16:59:07.441329] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.269 [2024-07-15 16:59:07.441400] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.269 [2024-07-15 16:59:07.456930] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.269 [2024-07-15 16:59:07.456983] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.269 [2024-07-15 16:59:07.472895] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.269 [2024-07-15 16:59:07.472958] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.269 [2024-07-15 16:59:07.483068] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.269 [2024-07-15 16:59:07.483131] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.269 [2024-07-15 16:59:07.495926] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.269 [2024-07-15 16:59:07.496006] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.269 [2024-07-15 16:59:07.509359] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.269 [2024-07-15 16:59:07.509443] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.269 [2024-07-15 16:59:07.524578] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.269 [2024-07-15 16:59:07.524651] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.269 [2024-07-15 16:59:07.540238] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.269 [2024-07-15 16:59:07.540292] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.269 [2024-07-15 16:59:07.549994] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.269 [2024-07-15 16:59:07.550046] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.269 [2024-07-15 16:59:07.562564] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.269 [2024-07-15 16:59:07.562602] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.528 [2024-07-15 16:59:07.573163] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.528 [2024-07-15 16:59:07.573201] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.528 [2024-07-15 16:59:07.587808] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.528 [2024-07-15 16:59:07.587849] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.528 [2024-07-15 16:59:07.605445] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.528 [2024-07-15 16:59:07.605505] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.528 [2024-07-15 16:59:07.616399] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.528 [2024-07-15 16:59:07.616450] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.528 [2024-07-15 16:59:07.627555] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.528 [2024-07-15 16:59:07.627610] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.528 [2024-07-15 16:59:07.638488] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.528 [2024-07-15 16:59:07.638539] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.528 [2024-07-15 16:59:07.649784] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.528 [2024-07-15 16:59:07.649825] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.528 [2024-07-15 16:59:07.662693] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.528 [2024-07-15 16:59:07.662741] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.528 [2024-07-15 16:59:07.680282] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.528 [2024-07-15 16:59:07.680343] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.528 [2024-07-15 16:59:07.695961] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.528 [2024-07-15 16:59:07.696021] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.528 [2024-07-15 16:59:07.705486] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.528 [2024-07-15 16:59:07.705537] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.528 [2024-07-15 16:59:07.721411] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.528 [2024-07-15 16:59:07.721457] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.528 [2024-07-15 16:59:07.731551] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.528 [2024-07-15 16:59:07.731599] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.528 [2024-07-15 16:59:07.746161] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.528 [2024-07-15 16:59:07.746219] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.528 [2024-07-15 16:59:07.763597] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.528 [2024-07-15 16:59:07.763657] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.528 [2024-07-15 16:59:07.773757] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.528 [2024-07-15 16:59:07.773807] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.528 [2024-07-15 16:59:07.785683] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.528 [2024-07-15 16:59:07.785726] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.528 [2024-07-15 16:59:07.796483] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.528 [2024-07-15 16:59:07.796524] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.528 [2024-07-15 16:59:07.813630] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.528 [2024-07-15 16:59:07.813685] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.787 [2024-07-15 16:59:07.831057] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.787 [2024-07-15 16:59:07.831119] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.787 [2024-07-15 16:59:07.841677] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.787 [2024-07-15 16:59:07.841746] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.787 [2024-07-15 16:59:07.853045] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.787 [2024-07-15 16:59:07.853086] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.787 [2024-07-15 16:59:07.864531] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.787 [2024-07-15 16:59:07.864581] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.787 [2024-07-15 16:59:07.882071] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.787 [2024-07-15 16:59:07.882123] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.787 [2024-07-15 16:59:07.898966] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.787 [2024-07-15 16:59:07.899017] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.787 [2024-07-15 16:59:07.915561] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.787 [2024-07-15 16:59:07.915608] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.787 [2024-07-15 16:59:07.931945] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.787 [2024-07-15 16:59:07.931999] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.787 [2024-07-15 16:59:07.948470] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.787 [2024-07-15 16:59:07.948522] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.787 [2024-07-15 16:59:07.958488] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.787 [2024-07-15 16:59:07.958536] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.787 [2024-07-15 16:59:07.973172] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.787 [2024-07-15 16:59:07.973220] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.787 [2024-07-15 16:59:07.982862] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.787 [2024-07-15 16:59:07.982898] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.787 [2024-07-15 16:59:07.997659] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.787 [2024-07-15 16:59:07.997725] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.787 [2024-07-15 16:59:08.008101] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.787 [2024-07-15 16:59:08.008152] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.787 [2024-07-15 16:59:08.024317] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.787 [2024-07-15 16:59:08.024401] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.787 [2024-07-15 16:59:08.039705] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.787 [2024-07-15 16:59:08.039757] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.787 [2024-07-15 16:59:08.049450] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.787 [2024-07-15 16:59:08.049487] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.787 [2024-07-15 16:59:08.061207] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.787 [2024-07-15 16:59:08.061249] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.787 [2024-07-15 16:59:08.076327] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.787 [2024-07-15 16:59:08.076407] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.045 [2024-07-15 16:59:08.093115] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.045 [2024-07-15 16:59:08.093173] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.045 [2024-07-15 16:59:08.109482] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.045 [2024-07-15 16:59:08.109529] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.046 [2024-07-15 16:59:08.126833] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.046 [2024-07-15 16:59:08.126884] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.046 [2024-07-15 16:59:08.137827] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.046 [2024-07-15 16:59:08.137886] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.046 [2024-07-15 16:59:08.150772] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.046 [2024-07-15 16:59:08.150811] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.046 [2024-07-15 16:59:08.161267] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.046 [2024-07-15 16:59:08.161319] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.046 [2024-07-15 16:59:08.176044] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.046 [2024-07-15 16:59:08.176097] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.046 [2024-07-15 16:59:08.192874] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.046 [2024-07-15 16:59:08.192932] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.046 [2024-07-15 16:59:08.202727] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.046 [2024-07-15 16:59:08.202779] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.046 [2024-07-15 16:59:08.213947] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.046 [2024-07-15 16:59:08.214000] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.046 [2024-07-15 16:59:08.225386] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.046 [2024-07-15 16:59:08.225435] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.046 [2024-07-15 16:59:08.236353] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.046 [2024-07-15 16:59:08.236435] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.046 [2024-07-15 16:59:08.253485] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.046 [2024-07-15 16:59:08.253539] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.046 [2024-07-15 16:59:08.269854] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.046 [2024-07-15 16:59:08.269939] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.046 [2024-07-15 16:59:08.279898] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.046 [2024-07-15 16:59:08.279934] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.046 [2024-07-15 16:59:08.294167] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.046 [2024-07-15 16:59:08.294204] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.046 [2024-07-15 16:59:08.304165] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.046 [2024-07-15 16:59:08.304202] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.046 [2024-07-15 16:59:08.319703] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.046 [2024-07-15 16:59:08.319740] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.046 [2024-07-15 16:59:08.330485] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.046 [2024-07-15 16:59:08.330520] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.046 [2024-07-15 16:59:08.341702] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.046 [2024-07-15 16:59:08.341739] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.304 [2024-07-15 16:59:08.361577] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.304 [2024-07-15 16:59:08.361619] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.304 [2024-07-15 16:59:08.372060] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.304 [2024-07-15 16:59:08.372112] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.304 [2024-07-15 16:59:08.382948] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.304 [2024-07-15 16:59:08.382999] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.304 [2024-07-15 16:59:08.395293] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.305 [2024-07-15 16:59:08.395346] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.305 [2024-07-15 16:59:08.405253] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.305 [2024-07-15 16:59:08.405306] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.305 [2024-07-15 16:59:08.412728] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.305 [2024-07-15 16:59:08.412779] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.305 00:10:18.305 Latency(us) 00:10:18.305 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:18.305 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:10:18.305 Nvme1n1 : 5.01 11431.75 89.31 0.00 0.00 11181.16 4319.42 20375.74 00:10:18.305 =================================================================================================================== 00:10:18.305 Total : 11431.75 89.31 0.00 0.00 11181.16 4319.42 20375.74 00:10:18.305 [2024-07-15 16:59:08.419784] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.305 [2024-07-15 16:59:08.419822] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.305 [2024-07-15 16:59:08.427771] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.305 [2024-07-15 16:59:08.427803] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.305 [2024-07-15 16:59:08.435771] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.305 [2024-07-15 16:59:08.435803] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.305 [2024-07-15 16:59:08.443783] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.305 [2024-07-15 16:59:08.443818] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.305 [2024-07-15 16:59:08.451787] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.305 [2024-07-15 16:59:08.451823] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.305 [2024-07-15 16:59:08.459798] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.305 [2024-07-15 16:59:08.459834] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.305 [2024-07-15 16:59:08.467796] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.305 [2024-07-15 16:59:08.467834] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.305 [2024-07-15 16:59:08.475799] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.305 [2024-07-15 16:59:08.475837] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.305 [2024-07-15 16:59:08.487817] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.305 [2024-07-15 16:59:08.487856] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.305 [2024-07-15 16:59:08.495805] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.305 [2024-07-15 16:59:08.495842] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.305 [2024-07-15 16:59:08.503808] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.305 [2024-07-15 16:59:08.503845] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.305 [2024-07-15 16:59:08.511809] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.305 [2024-07-15 16:59:08.511846] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.305 [2024-07-15 16:59:08.519807] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.305 [2024-07-15 16:59:08.519841] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.305 [2024-07-15 16:59:08.531819] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.305 [2024-07-15 16:59:08.531859] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.305 [2024-07-15 16:59:08.539809] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.305 [2024-07-15 16:59:08.539843] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.305 [2024-07-15 16:59:08.547804] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.305 [2024-07-15 16:59:08.547839] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.305 [2024-07-15 16:59:08.555807] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.305 [2024-07-15 16:59:08.555834] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.305 [2024-07-15 16:59:08.563804] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.305 [2024-07-15 16:59:08.563832] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.305 [2024-07-15 16:59:08.571819] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.305 [2024-07-15 16:59:08.571853] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.305 [2024-07-15 16:59:08.579838] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.305 [2024-07-15 16:59:08.579884] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.305 [2024-07-15 16:59:08.587844] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.305 [2024-07-15 16:59:08.587885] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.305 [2024-07-15 16:59:08.595862] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.305 [2024-07-15 16:59:08.595912] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.564 [2024-07-15 16:59:08.603899] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.564 [2024-07-15 16:59:08.603996] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.564 [2024-07-15 16:59:08.615862] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.564 [2024-07-15 16:59:08.615909] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.564 [2024-07-15 16:59:08.623832] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.564 [2024-07-15 16:59:08.623862] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.564 [2024-07-15 16:59:08.631828] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.564 [2024-07-15 16:59:08.631856] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.564 [2024-07-15 16:59:08.639829] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.564 [2024-07-15 16:59:08.639857] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.564 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (67721) - No such process 00:10:18.564 16:59:08 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 67721 00:10:18.564 16:59:08 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:18.564 16:59:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:18.564 16:59:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:18.564 16:59:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:18.564 16:59:08 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:18.564 16:59:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:18.564 16:59:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:18.564 delay0 00:10:18.564 16:59:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:18.564 16:59:08 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:10:18.564 16:59:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:18.564 16:59:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:18.564 16:59:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:18.564 16:59:08 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:10:18.564 [2024-07-15 16:59:08.832111] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:25.131 Initializing NVMe Controllers 00:10:25.131 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:25.131 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:25.131 Initialization complete. Launching workers. 
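The long run of "Requested NSID 1 already in use" / "Unable to add namespace" messages above appears to be the zcopy test deliberately re-issuing nvmf_subsystem_add_ns for an NSID that already exists while a background I/O job (pid 67721 in this run) is still in flight, so each of those failures is expected. At this point the script has swapped namespace 1 onto a delay bdev and launched the abort example over NVMe/TCP. A minimal sketch of that same sequence issued by hand, using the rpc.py script and example binary from this run (paths relative to the SPDK repo; target address and NQN as in the log):

  # replace namespace 1 of cnode1 with a slow delay bdev (values mirror zcopy.sh above)
  scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
  # queue 64-deep randrw I/O over NVMe/TCP for 5 s and abort it (flags mirror the run above)
  build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'

The abort submitted/success counts reported a few lines below come from this invocation.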
00:10:25.131 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 95 00:10:25.131 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 382, failed to submit 33 00:10:25.131 success 240, unsuccess 142, failed 0 00:10:25.131 16:59:14 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:10:25.131 16:59:14 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:10:25.131 16:59:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:25.131 16:59:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:10:25.131 16:59:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:25.131 16:59:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:10:25.131 16:59:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:25.131 16:59:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:25.131 rmmod nvme_tcp 00:10:25.131 rmmod nvme_fabrics 00:10:25.131 rmmod nvme_keyring 00:10:25.131 16:59:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:25.131 16:59:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:10:25.131 16:59:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:10:25.131 16:59:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 67572 ']' 00:10:25.131 16:59:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 67572 00:10:25.131 16:59:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 67572 ']' 00:10:25.131 16:59:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 67572 00:10:25.131 16:59:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:10:25.131 16:59:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:25.131 16:59:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 67572 00:10:25.131 16:59:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:25.131 16:59:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:25.131 killing process with pid 67572 00:10:25.131 16:59:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67572' 00:10:25.131 16:59:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 67572 00:10:25.131 16:59:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 67572 00:10:25.131 16:59:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:25.131 16:59:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:25.131 16:59:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:25.131 16:59:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:25.131 16:59:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:25.131 16:59:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:25.131 16:59:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:25.131 16:59:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:25.131 16:59:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:25.131 00:10:25.131 real 0m24.893s 00:10:25.131 user 0m40.494s 00:10:25.131 sys 0m7.101s 00:10:25.131 16:59:15 nvmf_tcp.nvmf_zcopy -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:10:25.131 ************************************ 00:10:25.131 END TEST nvmf_zcopy 00:10:25.131 ************************************ 00:10:25.131 16:59:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:25.131 16:59:15 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:25.131 16:59:15 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:25.131 16:59:15 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:25.131 16:59:15 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:25.131 16:59:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:25.131 ************************************ 00:10:25.131 START TEST nvmf_nmic 00:10:25.131 ************************************ 00:10:25.131 16:59:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:25.389 * Looking for test storage... 00:10:25.389 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:25.389 16:59:15 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:25.389 16:59:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:10:25.389 16:59:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:25.389 16:59:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:25.389 16:59:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:25.389 16:59:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:25.389 16:59:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:25.389 16:59:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:25.389 16:59:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:25.389 16:59:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:25.389 16:59:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:25.389 16:59:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:25.389 16:59:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da 00:10:25.389 16:59:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=0b4e8503-7bac-4879-926a-209303c4b3da 00:10:25.390 16:59:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:25.390 16:59:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:25.390 16:59:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:25.390 16:59:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:25.390 16:59:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:25.390 16:59:15 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:25.390 16:59:15 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:25.390 16:59:15 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:25.390 16:59:15 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.390 16:59:15 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.390 16:59:15 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.390 16:59:15 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:10:25.390 16:59:15 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.390 16:59:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:10:25.390 16:59:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:25.390 16:59:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:25.390 16:59:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:25.390 16:59:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:25.390 16:59:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:25.390 16:59:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:25.390 16:59:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:25.390 16:59:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:25.390 16:59:15 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:25.390 16:59:15 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:25.390 16:59:15 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:10:25.390 16:59:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:25.390 16:59:15 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:25.390 16:59:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:25.390 16:59:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:25.390 16:59:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:25.390 16:59:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:25.390 16:59:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:25.390 16:59:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:25.390 16:59:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:25.390 16:59:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:25.390 16:59:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:25.390 16:59:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:25.390 16:59:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:25.390 16:59:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:25.390 16:59:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:25.390 16:59:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:25.390 16:59:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:25.390 16:59:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:25.390 16:59:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:25.390 16:59:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:25.390 16:59:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:25.390 16:59:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:25.390 16:59:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:25.390 16:59:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:25.390 16:59:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:25.390 16:59:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:25.390 16:59:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:25.390 16:59:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:25.390 Cannot find device "nvmf_tgt_br" 00:10:25.390 16:59:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@155 -- # true 00:10:25.390 16:59:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:25.390 Cannot find device "nvmf_tgt_br2" 00:10:25.390 16:59:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@156 -- # true 00:10:25.390 16:59:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:25.390 16:59:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:25.390 Cannot find device "nvmf_tgt_br" 00:10:25.390 16:59:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@158 -- # true 00:10:25.390 16:59:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:25.390 Cannot find device "nvmf_tgt_br2" 00:10:25.390 16:59:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@159 -- # true 00:10:25.390 16:59:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@160 -- # ip link delete nvmf_br type 
bridge 00:10:25.390 16:59:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:25.390 16:59:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:25.390 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:25.390 16:59:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:10:25.390 16:59:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:25.390 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:25.390 16:59:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:10:25.390 16:59:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:25.390 16:59:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:25.390 16:59:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:25.390 16:59:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:25.390 16:59:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:25.390 16:59:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:25.390 16:59:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:25.390 16:59:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:25.390 16:59:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:25.390 16:59:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:25.390 16:59:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:25.648 16:59:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:25.648 16:59:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:25.648 16:59:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:25.648 16:59:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:25.648 16:59:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:25.648 16:59:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:25.648 16:59:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:25.648 16:59:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:25.648 16:59:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:25.648 16:59:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:25.648 16:59:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:25.648 16:59:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:25.648 16:59:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:25.648 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:25.648 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:10:25.648 00:10:25.648 --- 10.0.0.2 ping statistics --- 00:10:25.648 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:25.648 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:10:25.648 16:59:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:25.648 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:25.648 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:10:25.648 00:10:25.648 --- 10.0.0.3 ping statistics --- 00:10:25.648 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:25.648 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:10:25.648 16:59:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:25.648 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:25.648 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:10:25.648 00:10:25.648 --- 10.0.0.1 ping statistics --- 00:10:25.648 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:25.648 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:10:25.648 16:59:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:25.648 16:59:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@433 -- # return 0 00:10:25.648 16:59:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:25.648 16:59:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:25.648 16:59:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:25.648 16:59:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:25.648 16:59:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:25.648 16:59:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:25.648 16:59:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:25.648 16:59:15 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:25.648 16:59:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:25.648 16:59:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:25.648 16:59:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:25.648 16:59:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=68046 00:10:25.648 16:59:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:25.648 16:59:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 68046 00:10:25.648 16:59:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 68046 ']' 00:10:25.648 16:59:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:25.648 16:59:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:25.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:25.648 16:59:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:25.648 16:59:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:25.648 16:59:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:25.648 [2024-07-15 16:59:15.863791] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
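Before the target application starts (the SPDK/DPDK start-up messages beginning just above and continuing below), nvmf_veth_init has built the test network: an initiator veth (nvmf_init_if, 10.0.0.1) in the root namespace, target interfaces (nvmf_tgt_if and nvmf_tgt_if2, 10.0.0.2 and 10.0.0.3) inside the nvmf_tgt_ns_spdk namespace, all joined through the nvmf_br bridge, plus an iptables rule opening TCP/4420; the three pings above are the connectivity check. A condensed sketch of that bring-up for the first target interface only, with names and addresses as in the log (the nvmf_tgt_if2/10.0.0.3 leg is set up the same way):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge; ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br; ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2    # initiator-side reachability check, as in the output above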
00:10:25.648 [2024-07-15 16:59:15.863893] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:25.907 [2024-07-15 16:59:16.001545] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:25.907 [2024-07-15 16:59:16.125575] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:25.907 [2024-07-15 16:59:16.125887] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:25.907 [2024-07-15 16:59:16.125997] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:25.907 [2024-07-15 16:59:16.126080] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:25.907 [2024-07-15 16:59:16.126160] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:25.907 [2024-07-15 16:59:16.126436] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:25.907 [2024-07-15 16:59:16.126579] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:25.907 [2024-07-15 16:59:16.127135] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:25.907 [2024-07-15 16:59:16.127149] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:25.907 [2024-07-15 16:59:16.184050] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:26.837 16:59:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:26.837 16:59:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:10:26.837 16:59:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:26.837 16:59:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:26.837 16:59:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:26.837 16:59:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:26.837 16:59:16 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:26.837 16:59:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:26.837 16:59:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:26.837 [2024-07-15 16:59:16.901551] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:26.837 16:59:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:26.837 16:59:16 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:26.837 16:59:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:26.837 16:59:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:26.837 Malloc0 00:10:26.837 16:59:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:26.837 16:59:16 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:26.837 16:59:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:26.837 16:59:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:26.837 16:59:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
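With nvmf_tgt (pid 68046) now listening on /var/tmp/spdk.sock, the nmic test configures it over JSON-RPC: a TCP transport, a 64 MB malloc bdev with 512-byte blocks, and subsystem nqn.2016-06.io.spdk:cnode1; the namespace and the 10.0.0.2:4420 listener are added in the lines that follow. Issued directly with rpc.py (which talks to /var/tmp/spdk.sock by default), the same configuration looks roughly like this, with names and values taken from the log:

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192   # same transport options as the test
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0      # MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Test case 1 below then provokes the "bdev Malloc0 already claimed" JSON-RPC error by pointing a second subsystem (cnode2) at the same Malloc0.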
00:10:26.837 16:59:16 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:26.837 16:59:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:26.837 16:59:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:26.837 16:59:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:26.837 16:59:16 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:26.837 16:59:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:26.837 16:59:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:26.837 [2024-07-15 16:59:16.973565] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:26.837 16:59:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:26.837 test case1: single bdev can't be used in multiple subsystems 00:10:26.837 16:59:16 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:26.837 16:59:16 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:26.837 16:59:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:26.837 16:59:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:26.837 16:59:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:26.837 16:59:16 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:26.837 16:59:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:26.837 16:59:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:26.837 16:59:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:26.837 16:59:16 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:26.837 16:59:16 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:26.837 16:59:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:26.837 16:59:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:26.837 [2024-07-15 16:59:16.997440] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:26.837 [2024-07-15 16:59:16.997480] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:26.837 [2024-07-15 16:59:16.997493] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.837 request: 00:10:26.837 { 00:10:26.837 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:26.837 "namespace": { 00:10:26.837 "bdev_name": "Malloc0", 00:10:26.837 "no_auto_visible": false 00:10:26.837 }, 00:10:26.837 "method": "nvmf_subsystem_add_ns", 00:10:26.837 "req_id": 1 00:10:26.837 } 00:10:26.837 Got JSON-RPC error response 00:10:26.837 response: 00:10:26.837 { 00:10:26.837 "code": -32602, 00:10:26.837 "message": "Invalid parameters" 00:10:26.837 } 00:10:26.837 16:59:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:10:26.837 16:59:17 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:26.837 16:59:17 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 
-eq 0 ']' 00:10:26.837 Adding namespace failed - expected result. 00:10:26.837 16:59:17 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:26.838 test case2: host connect to nvmf target in multiple paths 00:10:26.838 16:59:17 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:26.838 16:59:17 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:10:26.838 16:59:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:26.838 16:59:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:26.838 [2024-07-15 16:59:17.009572] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:10:26.838 16:59:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:26.838 16:59:17 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --hostid=0b4e8503-7bac-4879-926a-209303c4b3da -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:27.095 16:59:17 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --hostid=0b4e8503-7bac-4879-926a-209303c4b3da -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:10:27.095 16:59:17 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:27.095 16:59:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:10:27.095 16:59:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:27.095 16:59:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:27.095 16:59:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:10:29.055 16:59:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:29.055 16:59:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:29.055 16:59:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:29.055 16:59:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:29.055 16:59:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:29.055 16:59:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:10:29.055 16:59:19 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:29.055 [global] 00:10:29.055 thread=1 00:10:29.055 invalidate=1 00:10:29.055 rw=write 00:10:29.055 time_based=1 00:10:29.055 runtime=1 00:10:29.055 ioengine=libaio 00:10:29.055 direct=1 00:10:29.055 bs=4096 00:10:29.055 iodepth=1 00:10:29.055 norandommap=0 00:10:29.055 numjobs=1 00:10:29.055 00:10:29.055 verify_dump=1 00:10:29.055 verify_backlog=512 00:10:29.055 verify_state_save=0 00:10:29.055 do_verify=1 00:10:29.055 verify=crc32c-intel 00:10:29.055 [job0] 00:10:29.055 filename=/dev/nvme0n1 00:10:29.055 Could not set queue depth (nvme0n1) 00:10:29.313 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:29.313 fio-3.35 00:10:29.313 Starting 1 thread 00:10:30.690 00:10:30.690 job0: (groupid=0, jobs=1): err= 0: pid=68138: Mon Jul 15 16:59:20 
2024 00:10:30.690 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:10:30.690 slat (nsec): min=12751, max=42182, avg=14671.33, stdev=2156.86 00:10:30.690 clat (usec): min=140, max=258, avg=172.07, stdev=12.69 00:10:30.690 lat (usec): min=153, max=271, avg=186.74, stdev=12.78 00:10:30.690 clat percentiles (usec): 00:10:30.690 | 1.00th=[ 147], 5.00th=[ 153], 10.00th=[ 155], 20.00th=[ 161], 00:10:30.690 | 30.00th=[ 165], 40.00th=[ 169], 50.00th=[ 174], 60.00th=[ 176], 00:10:30.690 | 70.00th=[ 180], 80.00th=[ 184], 90.00th=[ 188], 95.00th=[ 194], 00:10:30.690 | 99.00th=[ 202], 99.50th=[ 206], 99.90th=[ 223], 99.95th=[ 243], 00:10:30.690 | 99.99th=[ 260] 00:10:30.690 write: IOPS=3262, BW=12.7MiB/s (13.4MB/s)(12.8MiB/1001msec); 0 zone resets 00:10:30.690 slat (usec): min=17, max=137, avg=21.62, stdev= 5.77 00:10:30.690 clat (usec): min=83, max=182, avg=105.49, stdev=10.04 00:10:30.690 lat (usec): min=105, max=278, avg=127.11, stdev=13.07 00:10:30.690 clat percentiles (usec): 00:10:30.691 | 1.00th=[ 89], 5.00th=[ 92], 10.00th=[ 94], 20.00th=[ 98], 00:10:30.691 | 30.00th=[ 100], 40.00th=[ 103], 50.00th=[ 104], 60.00th=[ 106], 00:10:30.691 | 70.00th=[ 110], 80.00th=[ 113], 90.00th=[ 118], 95.00th=[ 124], 00:10:30.691 | 99.00th=[ 139], 99.50th=[ 145], 99.90th=[ 153], 99.95th=[ 176], 00:10:30.691 | 99.99th=[ 184] 00:10:30.691 bw ( KiB/s): min=12856, max=12856, per=98.51%, avg=12856.00, stdev= 0.00, samples=1 00:10:30.691 iops : min= 3214, max= 3214, avg=3214.00, stdev= 0.00, samples=1 00:10:30.691 lat (usec) : 100=14.93%, 250=85.06%, 500=0.02% 00:10:30.691 cpu : usr=2.30%, sys=9.00%, ctx=6338, majf=0, minf=2 00:10:30.691 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:30.691 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:30.691 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:30.691 issued rwts: total=3072,3266,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:30.691 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:30.691 00:10:30.691 Run status group 0 (all jobs): 00:10:30.691 READ: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:10:30.691 WRITE: bw=12.7MiB/s (13.4MB/s), 12.7MiB/s-12.7MiB/s (13.4MB/s-13.4MB/s), io=12.8MiB (13.4MB), run=1001-1001msec 00:10:30.691 00:10:30.691 Disk stats (read/write): 00:10:30.691 nvme0n1: ios=2702/3072, merge=0/0, ticks=488/364, in_queue=852, util=91.27% 00:10:30.691 16:59:20 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:30.691 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:30.691 16:59:20 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:30.691 16:59:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:10:30.691 16:59:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:30.691 16:59:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:30.691 16:59:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:30.691 16:59:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:30.691 16:59:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:10:30.691 16:59:20 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:30.691 16:59:20 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- 
# nvmftestfini 00:10:30.691 16:59:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:30.691 16:59:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:10:30.691 16:59:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:30.691 16:59:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:10:30.691 16:59:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:30.691 16:59:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:30.691 rmmod nvme_tcp 00:10:30.691 rmmod nvme_fabrics 00:10:30.691 rmmod nvme_keyring 00:10:30.691 16:59:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:30.691 16:59:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:10:30.691 16:59:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:10:30.691 16:59:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 68046 ']' 00:10:30.691 16:59:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 68046 00:10:30.691 16:59:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 68046 ']' 00:10:30.691 16:59:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 68046 00:10:30.691 16:59:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:10:30.691 16:59:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:30.691 16:59:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 68046 00:10:30.691 16:59:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:30.691 16:59:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:30.691 killing process with pid 68046 00:10:30.691 16:59:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68046' 00:10:30.691 16:59:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 68046 00:10:30.691 16:59:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 68046 00:10:30.950 16:59:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:30.950 16:59:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:30.950 16:59:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:30.950 16:59:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:30.950 16:59:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:30.950 16:59:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:30.950 16:59:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:30.950 16:59:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:30.950 16:59:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:30.950 00:10:30.950 real 0m5.707s 00:10:30.950 user 0m18.238s 00:10:30.950 sys 0m2.284s 00:10:30.950 16:59:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:30.950 ************************************ 00:10:30.950 END TEST nvmf_nmic 00:10:30.950 ************************************ 00:10:30.950 16:59:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:30.950 16:59:21 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:30.950 16:59:21 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 
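The teardown closing each of these target tests follows the same pattern nvmftestfini runs above: disconnect the initiator, unload the NVMe-over-Fabrics host modules, stop the nvmf_tgt process, and clear the test network. Roughly, for this run (the PID is the one nvmfappstart recorded, 68046 here; the namespace removal is the assumed effect of _remove_spdk_ns):

  nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # drops both paths set up in test case 2
  modprobe -v -r nvme-tcp                         # rmmod of nvme_tcp/nvme_fabrics/nvme_keyring, as logged
  modprobe -v -r nvme-fabrics
  kill 68046                                      # nvmf_tgt started by nvmfappstart
  ip netns delete nvmf_tgt_ns_spdk                # assumed equivalent of _remove_spdk_ns here
  ip -4 addr flush nvmf_init_if

The nvmf_fio_target test that starts next repeats the same network and target bring-up before running fio against the exported namespaces.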
00:10:30.950 16:59:21 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:30.950 16:59:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:30.950 16:59:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:30.950 ************************************ 00:10:30.950 START TEST nvmf_fio_target 00:10:30.950 ************************************ 00:10:30.950 16:59:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:30.950 * Looking for test storage... 00:10:30.950 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:30.950 16:59:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:30.950 16:59:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:10:30.950 16:59:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:30.950 16:59:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:30.950 16:59:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:30.950 16:59:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:30.950 16:59:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:30.950 16:59:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:30.950 16:59:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:30.950 16:59:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:30.950 16:59:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:30.950 16:59:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:30.950 16:59:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da 00:10:30.950 16:59:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=0b4e8503-7bac-4879-926a-209303c4b3da 00:10:30.951 16:59:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:30.951 16:59:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:30.951 16:59:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:30.951 16:59:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:30.951 16:59:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:30.951 16:59:21 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:30.951 16:59:21 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:30.951 16:59:21 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:30.951 16:59:21 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.951 16:59:21 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.951 16:59:21 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.951 16:59:21 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:30.951 16:59:21 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.951 16:59:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:10:30.951 16:59:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:30.951 16:59:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:30.951 16:59:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:30.951 16:59:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:30.951 16:59:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:30.951 16:59:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:30.951 16:59:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:30.951 16:59:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:30.951 16:59:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:30.951 16:59:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:30.951 16:59:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:30.951 16:59:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:30.951 16:59:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:30.951 16:59:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:30.951 16:59:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:30.951 16:59:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:30.951 16:59:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:30.951 16:59:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:30.951 16:59:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:30.951 16:59:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:30.951 16:59:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:30.951 16:59:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:30.951 16:59:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:30.951 16:59:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:30.951 16:59:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:30.951 16:59:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:30.951 16:59:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:30.951 16:59:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:30.951 16:59:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:30.951 16:59:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:30.951 16:59:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:30.951 16:59:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:30.951 16:59:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:30.951 16:59:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:30.951 16:59:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:30.951 16:59:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:30.951 16:59:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:30.951 16:59:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:30.951 16:59:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:30.951 16:59:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:30.951 Cannot find device "nvmf_tgt_br" 00:10:30.951 16:59:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@155 -- # true 00:10:30.951 16:59:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:31.210 Cannot find device "nvmf_tgt_br2" 00:10:31.210 16:59:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@156 -- # true 00:10:31.210 16:59:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:31.210 16:59:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@158 -- # 
ip link set nvmf_tgt_br down 00:10:31.210 Cannot find device "nvmf_tgt_br" 00:10:31.210 16:59:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@158 -- # true 00:10:31.210 16:59:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:31.210 Cannot find device "nvmf_tgt_br2" 00:10:31.210 16:59:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@159 -- # true 00:10:31.210 16:59:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:31.210 16:59:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:31.210 16:59:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:31.210 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:31.210 16:59:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:10:31.210 16:59:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:31.210 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:31.210 16:59:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:10:31.210 16:59:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:31.210 16:59:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:31.210 16:59:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:31.210 16:59:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:31.210 16:59:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:31.210 16:59:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:31.210 16:59:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:31.210 16:59:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:31.210 16:59:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:31.210 16:59:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:31.210 16:59:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:31.210 16:59:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:31.210 16:59:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:31.210 16:59:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:31.210 16:59:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:31.210 16:59:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:31.210 16:59:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:31.210 16:59:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:31.210 16:59:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:31.210 16:59:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br 
master nvmf_br 00:10:31.210 16:59:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:31.210 16:59:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:31.468 16:59:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:31.468 16:59:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:31.469 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:31.469 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:10:31.469 00:10:31.469 --- 10.0.0.2 ping statistics --- 00:10:31.469 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:31.469 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:10:31.469 16:59:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:31.469 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:31.469 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:10:31.469 00:10:31.469 --- 10.0.0.3 ping statistics --- 00:10:31.469 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:31.469 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:10:31.469 16:59:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:31.469 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:31.469 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:10:31.469 00:10:31.469 --- 10.0.0.1 ping statistics --- 00:10:31.469 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:31.469 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:10:31.469 16:59:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:31.469 16:59:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@433 -- # return 0 00:10:31.469 16:59:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:31.469 16:59:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:31.469 16:59:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:31.469 16:59:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:31.469 16:59:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:31.469 16:59:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:31.469 16:59:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:31.469 16:59:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:31.469 16:59:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:31.469 16:59:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:31.469 16:59:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:31.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
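For reference, the network plumbing that nvmf_veth_init traced above boils down to the following sequence (a condensed sketch reconstructed from the log; interface names, addresses and the 4420 port are taken straight from the trace, and the fallback/cleanup branches are omitted):

# Target side lives in its own network namespace; the initiator stays on the host.
ip netns add nvmf_tgt_ns_spdk
# One veth pair per role: the *_br end stays on the host and is enslaved to a bridge,
# the peer end is what carries traffic.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
# Move the target-facing ends into the namespace.
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
# Addressing: initiator 10.0.0.1, target listeners 10.0.0.2 and 10.0.0.3.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
# Bring everything up, inside and outside the namespace.
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br  up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if  up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
# Tie the host-side veth ends together with a bridge so 10.0.0.1 can reach both target addresses.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
# Open the NVMe/TCP port and allow hairpin forwarding on the bridge.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
# Sanity checks, matching the three pings in the log.
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1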
00:10:31.469 16:59:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=68315 00:10:31.469 16:59:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 68315 00:10:31.469 16:59:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:31.469 16:59:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 68315 ']' 00:10:31.469 16:59:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:31.469 16:59:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:31.469 16:59:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:31.469 16:59:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:31.469 16:59:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:31.469 [2024-07-15 16:59:21.602262] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:10:31.469 [2024-07-15 16:59:21.602566] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:31.469 [2024-07-15 16:59:21.737918] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:31.728 [2024-07-15 16:59:21.832766] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:31.728 [2024-07-15 16:59:21.833049] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:31.728 [2024-07-15 16:59:21.833204] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:31.728 [2024-07-15 16:59:21.833257] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:31.728 [2024-07-15 16:59:21.833351] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
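With the namespace in place, the target application is launched inside it and everything else is provisioned over the RPC socket; the fio.sh steps traced below reduce to roughly this sequence (a condensed sketch assembled from the rpc.py calls in the log; paths, NQNs and the serial value are the ones this run uses, while the per-bdev loops and the waitforlisten/waitforserial helpers are elided):

SPDK_ROOT=/home/vagrant/spdk_repo/spdk
rpc=$SPDK_ROOT/scripts/rpc.py

# Start nvmf_tgt on 4 cores inside the target namespace; its RPC UNIX socket
# (/var/tmp/spdk.sock) is still reachable from the host, so rpc.py needs no netns exec.
ip netns exec nvmf_tgt_ns_spdk $SPDK_ROOT/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

# Transport plus backing bdevs: plain malloc namespaces, one RAID0 and one concat volume.
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512                    # repeated for Malloc0..Malloc6 in the log
$rpc bdev_raid_create -n raid0   -z 64 -r 0      -b 'Malloc2 Malloc3'
$rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'

# Subsystem with four namespaces, listening on the in-namespace address.
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Host-side connect; hostnqn/hostid come from 'nvme gen-hostnqn' earlier in the trace.
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
    --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"

The four block devices this produces (nvme0n1..nvme0n4) are the filenames the fio-wrapper jobs below write to.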
00:10:31.728 [2024-07-15 16:59:21.833861] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:31.728 [2024-07-15 16:59:21.834037] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:31.728 [2024-07-15 16:59:21.834191] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:31.728 [2024-07-15 16:59:21.834556] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:31.728 [2024-07-15 16:59:21.887015] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:32.739 16:59:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:32.739 16:59:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:10:32.739 16:59:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:32.739 16:59:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:32.739 16:59:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:32.739 16:59:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:32.739 16:59:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:32.740 [2024-07-15 16:59:22.878768] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:32.740 16:59:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:32.998 16:59:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:32.998 16:59:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:33.256 16:59:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:33.256 16:59:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:33.514 16:59:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:33.514 16:59:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:33.772 16:59:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:33.772 16:59:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:34.031 16:59:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:34.290 16:59:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:34.290 16:59:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:34.549 16:59:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:34.549 16:59:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:34.808 16:59:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:34.808 16:59:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:35.067 16:59:25 
nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:35.326 16:59:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:35.326 16:59:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:35.584 16:59:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:35.584 16:59:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:35.844 16:59:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:36.111 [2024-07-15 16:59:26.213791] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:36.111 16:59:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:36.370 16:59:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:36.630 16:59:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --hostid=0b4e8503-7bac-4879-926a-209303c4b3da -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:36.630 16:59:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:36.630 16:59:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:10:36.630 16:59:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:36.630 16:59:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:10:36.630 16:59:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:10:36.630 16:59:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:10:39.187 16:59:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:39.187 16:59:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:39.187 16:59:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:39.187 16:59:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:10:39.187 16:59:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:39.187 16:59:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:10:39.187 16:59:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:39.187 [global] 00:10:39.187 thread=1 00:10:39.187 invalidate=1 00:10:39.187 rw=write 00:10:39.187 time_based=1 00:10:39.187 runtime=1 00:10:39.187 ioengine=libaio 00:10:39.187 direct=1 00:10:39.187 bs=4096 00:10:39.187 iodepth=1 00:10:39.187 norandommap=0 00:10:39.187 numjobs=1 00:10:39.187 00:10:39.187 verify_dump=1 00:10:39.187 verify_backlog=512 00:10:39.187 verify_state_save=0 00:10:39.187 do_verify=1 00:10:39.187 
verify=crc32c-intel 00:10:39.187 [job0] 00:10:39.187 filename=/dev/nvme0n1 00:10:39.187 [job1] 00:10:39.187 filename=/dev/nvme0n2 00:10:39.187 [job2] 00:10:39.187 filename=/dev/nvme0n3 00:10:39.187 [job3] 00:10:39.187 filename=/dev/nvme0n4 00:10:39.187 Could not set queue depth (nvme0n1) 00:10:39.187 Could not set queue depth (nvme0n2) 00:10:39.187 Could not set queue depth (nvme0n3) 00:10:39.187 Could not set queue depth (nvme0n4) 00:10:39.187 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:39.187 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:39.187 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:39.187 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:39.187 fio-3.35 00:10:39.187 Starting 4 threads 00:10:40.122 00:10:40.122 job0: (groupid=0, jobs=1): err= 0: pid=68499: Mon Jul 15 16:59:30 2024 00:10:40.122 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:10:40.122 slat (nsec): min=8314, max=51274, avg=14424.07, stdev=5009.96 00:10:40.122 clat (usec): min=136, max=5748, avg=197.79, stdev=143.23 00:10:40.122 lat (usec): min=149, max=5767, avg=212.21, stdev=143.07 00:10:40.122 clat percentiles (usec): 00:10:40.122 | 1.00th=[ 143], 5.00th=[ 151], 10.00th=[ 155], 20.00th=[ 159], 00:10:40.122 | 30.00th=[ 163], 40.00th=[ 167], 50.00th=[ 172], 60.00th=[ 180], 00:10:40.122 | 70.00th=[ 227], 80.00th=[ 239], 90.00th=[ 251], 95.00th=[ 269], 00:10:40.122 | 99.00th=[ 334], 99.50th=[ 355], 99.90th=[ 1729], 99.95th=[ 3752], 00:10:40.122 | 99.99th=[ 5735] 00:10:40.122 write: IOPS=2798, BW=10.9MiB/s (11.5MB/s)(10.9MiB/1001msec); 0 zone resets 00:10:40.122 slat (usec): min=10, max=113, avg=20.58, stdev= 5.64 00:10:40.122 clat (usec): min=89, max=7225, avg=139.26, stdev=144.40 00:10:40.122 lat (usec): min=108, max=7242, avg=159.84, stdev=144.21 00:10:40.122 clat percentiles (usec): 00:10:40.122 | 1.00th=[ 102], 5.00th=[ 108], 10.00th=[ 111], 20.00th=[ 116], 00:10:40.122 | 30.00th=[ 119], 40.00th=[ 122], 50.00th=[ 125], 60.00th=[ 129], 00:10:40.122 | 70.00th=[ 137], 80.00th=[ 167], 90.00th=[ 180], 95.00th=[ 188], 00:10:40.122 | 99.00th=[ 206], 99.50th=[ 231], 99.90th=[ 938], 99.95th=[ 2409], 00:10:40.122 | 99.99th=[ 7242] 00:10:40.122 bw ( KiB/s): min=12288, max=12288, per=31.31%, avg=12288.00, stdev= 0.00, samples=1 00:10:40.122 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:40.122 lat (usec) : 100=0.43%, 250=94.46%, 500=4.92%, 750=0.04%, 1000=0.04% 00:10:40.122 lat (msec) : 2=0.04%, 4=0.04%, 10=0.04% 00:10:40.122 cpu : usr=1.90%, sys=7.60%, ctx=5363, majf=0, minf=17 00:10:40.122 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:40.122 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:40.122 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:40.122 issued rwts: total=2560,2801,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:40.122 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:40.122 job1: (groupid=0, jobs=1): err= 0: pid=68500: Mon Jul 15 16:59:30 2024 00:10:40.122 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:10:40.122 slat (usec): min=12, max=108, avg=18.69, stdev= 6.65 00:10:40.122 clat (usec): min=164, max=2831, avg=265.49, stdev=77.89 00:10:40.122 lat (usec): min=178, max=2865, avg=284.18, stdev=78.71 
00:10:40.122 clat percentiles (usec): 00:10:40.122 | 1.00th=[ 182], 5.00th=[ 215], 10.00th=[ 223], 20.00th=[ 241], 00:10:40.122 | 30.00th=[ 249], 40.00th=[ 253], 50.00th=[ 260], 60.00th=[ 265], 00:10:40.122 | 70.00th=[ 269], 80.00th=[ 277], 90.00th=[ 289], 95.00th=[ 338], 00:10:40.123 | 99.00th=[ 498], 99.50th=[ 519], 99.90th=[ 840], 99.95th=[ 1057], 00:10:40.123 | 99.99th=[ 2835] 00:10:40.123 write: IOPS=2052, BW=8212KiB/s (8409kB/s)(8220KiB/1001msec); 0 zone resets 00:10:40.123 slat (usec): min=17, max=122, avg=23.32, stdev= 8.73 00:10:40.123 clat (usec): min=87, max=368, avg=175.78, stdev=37.24 00:10:40.123 lat (usec): min=106, max=490, avg=199.10, stdev=38.78 00:10:40.123 clat percentiles (usec): 00:10:40.123 | 1.00th=[ 96], 5.00th=[ 103], 10.00th=[ 111], 20.00th=[ 131], 00:10:40.123 | 30.00th=[ 174], 40.00th=[ 182], 50.00th=[ 188], 60.00th=[ 194], 00:10:40.123 | 70.00th=[ 198], 80.00th=[ 204], 90.00th=[ 210], 95.00th=[ 219], 00:10:40.123 | 99.00th=[ 237], 99.50th=[ 245], 99.90th=[ 289], 99.95th=[ 318], 00:10:40.123 | 99.99th=[ 367] 00:10:40.123 bw ( KiB/s): min= 8192, max= 8192, per=20.88%, avg=8192.00, stdev= 0.00, samples=1 00:10:40.123 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:40.123 lat (usec) : 100=1.44%, 250=65.32%, 500=32.76%, 750=0.39%, 1000=0.05% 00:10:40.123 lat (msec) : 2=0.02%, 4=0.02% 00:10:40.123 cpu : usr=1.20%, sys=7.50%, ctx=4104, majf=0, minf=7 00:10:40.123 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:40.123 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:40.123 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:40.123 issued rwts: total=2048,2055,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:40.123 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:40.123 job2: (groupid=0, jobs=1): err= 0: pid=68501: Mon Jul 15 16:59:30 2024 00:10:40.123 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:10:40.123 slat (nsec): min=11018, max=32287, avg=13252.02, stdev=1414.34 00:10:40.123 clat (usec): min=146, max=465, avg=193.56, stdev=37.82 00:10:40.123 lat (usec): min=159, max=477, avg=206.81, stdev=37.95 00:10:40.123 clat percentiles (usec): 00:10:40.123 | 1.00th=[ 155], 5.00th=[ 157], 10.00th=[ 161], 20.00th=[ 165], 00:10:40.123 | 30.00th=[ 167], 40.00th=[ 172], 50.00th=[ 178], 60.00th=[ 184], 00:10:40.123 | 70.00th=[ 217], 80.00th=[ 233], 90.00th=[ 243], 95.00th=[ 253], 00:10:40.123 | 99.00th=[ 318], 99.50th=[ 330], 99.90th=[ 449], 99.95th=[ 449], 00:10:40.123 | 99.99th=[ 465] 00:10:40.123 write: IOPS=2913, BW=11.4MiB/s (11.9MB/s)(11.4MiB/1001msec); 0 zone resets 00:10:40.123 slat (usec): min=10, max=103, avg=18.34, stdev= 4.30 00:10:40.123 clat (usec): min=101, max=1542, avg=140.16, stdev=40.52 00:10:40.123 lat (usec): min=119, max=1561, avg=158.50, stdev=40.26 00:10:40.123 clat percentiles (usec): 00:10:40.123 | 1.00th=[ 106], 5.00th=[ 111], 10.00th=[ 114], 20.00th=[ 118], 00:10:40.123 | 30.00th=[ 122], 40.00th=[ 126], 50.00th=[ 129], 60.00th=[ 135], 00:10:40.123 | 70.00th=[ 147], 80.00th=[ 169], 90.00th=[ 182], 95.00th=[ 190], 00:10:40.123 | 99.00th=[ 210], 99.50th=[ 227], 99.90th=[ 461], 99.95th=[ 783], 00:10:40.123 | 99.99th=[ 1549] 00:10:40.123 bw ( KiB/s): min=12288, max=12288, per=31.31%, avg=12288.00, stdev= 0.00, samples=1 00:10:40.123 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:40.123 lat (usec) : 250=96.86%, 500=3.10%, 1000=0.02% 00:10:40.123 lat (msec) : 2=0.02% 00:10:40.123 cpu : usr=1.90%, 
sys=6.90%, ctx=5477, majf=0, minf=3 00:10:40.123 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:40.123 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:40.123 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:40.123 issued rwts: total=2560,2916,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:40.123 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:40.123 job3: (groupid=0, jobs=1): err= 0: pid=68502: Mon Jul 15 16:59:30 2024 00:10:40.123 read: IOPS=1737, BW=6949KiB/s (7116kB/s)(6956KiB/1001msec) 00:10:40.123 slat (nsec): min=12829, max=58256, avg=19119.80, stdev=6008.08 00:10:40.123 clat (usec): min=165, max=504, avg=270.73, stdev=41.73 00:10:40.123 lat (usec): min=181, max=532, avg=289.85, stdev=44.45 00:10:40.123 clat percentiles (usec): 00:10:40.123 | 1.00th=[ 198], 5.00th=[ 237], 10.00th=[ 243], 20.00th=[ 249], 00:10:40.123 | 30.00th=[ 253], 40.00th=[ 258], 50.00th=[ 262], 60.00th=[ 265], 00:10:40.123 | 70.00th=[ 273], 80.00th=[ 277], 90.00th=[ 306], 95.00th=[ 363], 00:10:40.123 | 99.00th=[ 461], 99.50th=[ 469], 99.90th=[ 482], 99.95th=[ 506], 00:10:40.123 | 99.99th=[ 506] 00:10:40.123 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:40.123 slat (usec): min=17, max=446, avg=28.65, stdev=13.38 00:10:40.123 clat (usec): min=100, max=506, avg=209.28, stdev=46.81 00:10:40.123 lat (usec): min=127, max=621, avg=237.93, stdev=52.26 00:10:40.123 clat percentiles (usec): 00:10:40.123 | 1.00th=[ 123], 5.00th=[ 137], 10.00th=[ 172], 20.00th=[ 184], 00:10:40.123 | 30.00th=[ 190], 40.00th=[ 196], 50.00th=[ 200], 60.00th=[ 206], 00:10:40.123 | 70.00th=[ 212], 80.00th=[ 225], 90.00th=[ 289], 95.00th=[ 310], 00:10:40.123 | 99.00th=[ 351], 99.50th=[ 359], 99.90th=[ 379], 99.95th=[ 400], 00:10:40.123 | 99.99th=[ 506] 00:10:40.123 bw ( KiB/s): min= 8192, max= 8192, per=20.88%, avg=8192.00, stdev= 0.00, samples=1 00:10:40.123 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:40.123 lat (usec) : 250=55.90%, 500=44.05%, 750=0.05% 00:10:40.123 cpu : usr=2.00%, sys=7.00%, ctx=3790, majf=0, minf=8 00:10:40.123 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:40.123 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:40.123 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:40.123 issued rwts: total=1739,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:40.123 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:40.123 00:10:40.123 Run status group 0 (all jobs): 00:10:40.123 READ: bw=34.8MiB/s (36.4MB/s), 6949KiB/s-9.99MiB/s (7116kB/s-10.5MB/s), io=34.8MiB (36.5MB), run=1001-1001msec 00:10:40.123 WRITE: bw=38.3MiB/s (40.2MB/s), 8184KiB/s-11.4MiB/s (8380kB/s-11.9MB/s), io=38.4MiB (40.2MB), run=1001-1001msec 00:10:40.123 00:10:40.123 Disk stats (read/write): 00:10:40.123 nvme0n1: ios=2238/2560, merge=0/0, ticks=449/357, in_queue=806, util=87.47% 00:10:40.123 nvme0n2: ios=1575/2043, merge=0/0, ticks=447/376, in_queue=823, util=87.93% 00:10:40.123 nvme0n3: ios=2251/2560, merge=0/0, ticks=425/347, in_queue=772, util=89.18% 00:10:40.123 nvme0n4: ios=1536/1705, merge=0/0, ticks=420/369, in_queue=789, util=89.74% 00:10:40.123 16:59:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:40.123 [global] 00:10:40.123 thread=1 00:10:40.123 invalidate=1 00:10:40.123 rw=randwrite 00:10:40.123 
time_based=1 00:10:40.123 runtime=1 00:10:40.123 ioengine=libaio 00:10:40.123 direct=1 00:10:40.123 bs=4096 00:10:40.123 iodepth=1 00:10:40.123 norandommap=0 00:10:40.123 numjobs=1 00:10:40.123 00:10:40.123 verify_dump=1 00:10:40.123 verify_backlog=512 00:10:40.123 verify_state_save=0 00:10:40.123 do_verify=1 00:10:40.123 verify=crc32c-intel 00:10:40.123 [job0] 00:10:40.123 filename=/dev/nvme0n1 00:10:40.123 [job1] 00:10:40.123 filename=/dev/nvme0n2 00:10:40.123 [job2] 00:10:40.123 filename=/dev/nvme0n3 00:10:40.123 [job3] 00:10:40.123 filename=/dev/nvme0n4 00:10:40.123 Could not set queue depth (nvme0n1) 00:10:40.123 Could not set queue depth (nvme0n2) 00:10:40.123 Could not set queue depth (nvme0n3) 00:10:40.123 Could not set queue depth (nvme0n4) 00:10:40.382 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:40.382 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:40.382 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:40.382 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:40.382 fio-3.35 00:10:40.382 Starting 4 threads 00:10:41.758 00:10:41.758 job0: (groupid=0, jobs=1): err= 0: pid=68555: Mon Jul 15 16:59:31 2024 00:10:41.758 read: IOPS=2151, BW=8607KiB/s (8814kB/s)(8616KiB/1001msec) 00:10:41.758 slat (nsec): min=12033, max=40457, avg=16439.43, stdev=3476.44 00:10:41.758 clat (usec): min=135, max=7552, avg=225.68, stdev=190.28 00:10:41.758 lat (usec): min=150, max=7568, avg=242.12, stdev=189.99 00:10:41.758 clat percentiles (usec): 00:10:41.758 | 1.00th=[ 147], 5.00th=[ 153], 10.00th=[ 157], 20.00th=[ 161], 00:10:41.758 | 30.00th=[ 167], 40.00th=[ 174], 50.00th=[ 233], 60.00th=[ 249], 00:10:41.758 | 70.00th=[ 258], 80.00th=[ 265], 90.00th=[ 281], 95.00th=[ 334], 00:10:41.758 | 99.00th=[ 371], 99.50th=[ 383], 99.90th=[ 2933], 99.95th=[ 3261], 00:10:41.758 | 99.99th=[ 7570] 00:10:41.758 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:10:41.758 slat (usec): min=17, max=123, avg=23.95, stdev= 8.06 00:10:41.758 clat (usec): min=85, max=5200, avg=159.18, stdev=116.24 00:10:41.758 lat (usec): min=114, max=5222, avg=183.13, stdev=116.98 00:10:41.758 clat percentiles (usec): 00:10:41.758 | 1.00th=[ 103], 5.00th=[ 110], 10.00th=[ 113], 20.00th=[ 118], 00:10:41.758 | 30.00th=[ 123], 40.00th=[ 128], 50.00th=[ 139], 60.00th=[ 178], 00:10:41.758 | 70.00th=[ 184], 80.00th=[ 190], 90.00th=[ 200], 95.00th=[ 215], 00:10:41.758 | 99.00th=[ 318], 99.50th=[ 334], 99.90th=[ 494], 99.95th=[ 2212], 00:10:41.758 | 99.99th=[ 5211] 00:10:41.758 bw ( KiB/s): min=12288, max=12288, per=33.37%, avg=12288.00, stdev= 0.00, samples=1 00:10:41.758 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:41.758 lat (usec) : 100=0.21%, 250=80.23%, 500=19.45% 00:10:41.758 lat (msec) : 4=0.06%, 10=0.04% 00:10:41.758 cpu : usr=1.90%, sys=7.80%, ctx=4719, majf=0, minf=11 00:10:41.758 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:41.758 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:41.758 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:41.758 issued rwts: total=2154,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:41.758 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:41.758 job1: (groupid=0, jobs=1): err= 0: pid=68556: 
Mon Jul 15 16:59:31 2024 00:10:41.758 read: IOPS=1870, BW=7481KiB/s (7660kB/s)(7488KiB/1001msec) 00:10:41.758 slat (nsec): min=11405, max=50644, avg=16124.64, stdev=5525.25 00:10:41.758 clat (usec): min=151, max=862, avg=312.60, stdev=88.07 00:10:41.758 lat (usec): min=174, max=900, avg=328.72, stdev=91.64 00:10:41.758 clat percentiles (usec): 00:10:41.758 | 1.00th=[ 188], 5.00th=[ 241], 10.00th=[ 247], 20.00th=[ 258], 00:10:41.758 | 30.00th=[ 265], 40.00th=[ 269], 50.00th=[ 273], 60.00th=[ 281], 00:10:41.758 | 70.00th=[ 297], 80.00th=[ 367], 90.00th=[ 482], 95.00th=[ 494], 00:10:41.758 | 99.00th=[ 529], 99.50th=[ 545], 99.90th=[ 734], 99.95th=[ 865], 00:10:41.758 | 99.99th=[ 865] 00:10:41.758 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:41.758 slat (usec): min=16, max=123, avg=19.40, stdev= 4.47 00:10:41.758 clat (usec): min=95, max=1612, avg=164.77, stdev=55.40 00:10:41.758 lat (usec): min=114, max=1641, avg=184.16, stdev=56.09 00:10:41.758 clat percentiles (usec): 00:10:41.758 | 1.00th=[ 101], 5.00th=[ 108], 10.00th=[ 113], 20.00th=[ 118], 00:10:41.758 | 30.00th=[ 124], 40.00th=[ 133], 50.00th=[ 184], 60.00th=[ 190], 00:10:41.758 | 70.00th=[ 196], 80.00th=[ 202], 90.00th=[ 210], 95.00th=[ 219], 00:10:41.758 | 99.00th=[ 245], 99.50th=[ 330], 99.90th=[ 510], 99.95th=[ 553], 00:10:41.758 | 99.99th=[ 1614] 00:10:41.758 bw ( KiB/s): min= 8192, max= 8192, per=22.24%, avg=8192.00, stdev= 0.00, samples=1 00:10:41.758 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:41.758 lat (usec) : 100=0.48%, 250=57.35%, 500=40.10%, 750=2.02%, 1000=0.03% 00:10:41.758 lat (msec) : 2=0.03% 00:10:41.758 cpu : usr=1.50%, sys=5.80%, ctx=3920, majf=0, minf=9 00:10:41.758 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:41.758 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:41.758 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:41.758 issued rwts: total=1872,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:41.758 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:41.758 job2: (groupid=0, jobs=1): err= 0: pid=68557: Mon Jul 15 16:59:31 2024 00:10:41.758 read: IOPS=2225, BW=8903KiB/s (9117kB/s)(8912KiB/1001msec) 00:10:41.758 slat (nsec): min=11227, max=38513, avg=13849.72, stdev=2522.94 00:10:41.758 clat (usec): min=140, max=488, avg=226.57, stdev=63.06 00:10:41.758 lat (usec): min=153, max=503, avg=240.42, stdev=63.41 00:10:41.758 clat percentiles (usec): 00:10:41.758 | 1.00th=[ 151], 5.00th=[ 155], 10.00th=[ 159], 20.00th=[ 165], 00:10:41.758 | 30.00th=[ 172], 40.00th=[ 184], 50.00th=[ 237], 60.00th=[ 253], 00:10:41.758 | 70.00th=[ 262], 80.00th=[ 269], 90.00th=[ 314], 95.00th=[ 330], 00:10:41.758 | 99.00th=[ 445], 99.50th=[ 457], 99.90th=[ 478], 99.95th=[ 486], 00:10:41.758 | 99.99th=[ 490] 00:10:41.758 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:10:41.758 slat (usec): min=13, max=122, avg=20.72, stdev= 5.68 00:10:41.758 clat (usec): min=97, max=1546, avg=157.39, stdev=45.04 00:10:41.758 lat (usec): min=118, max=1565, avg=178.10, stdev=44.99 00:10:41.758 clat percentiles (usec): 00:10:41.758 | 1.00th=[ 105], 5.00th=[ 113], 10.00th=[ 117], 20.00th=[ 124], 00:10:41.758 | 30.00th=[ 129], 40.00th=[ 135], 50.00th=[ 145], 60.00th=[ 178], 00:10:41.758 | 70.00th=[ 186], 80.00th=[ 192], 90.00th=[ 200], 95.00th=[ 206], 00:10:41.758 | 99.00th=[ 229], 99.50th=[ 247], 99.90th=[ 478], 99.95th=[ 570], 00:10:41.758 | 99.99th=[ 1549] 
00:10:41.758 bw ( KiB/s): min=12288, max=12288, per=33.37%, avg=12288.00, stdev= 0.00, samples=1 00:10:41.758 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:41.758 lat (usec) : 100=0.02%, 250=80.03%, 500=19.90%, 750=0.02% 00:10:41.758 lat (msec) : 2=0.02% 00:10:41.758 cpu : usr=1.80%, sys=6.70%, ctx=4788, majf=0, minf=10 00:10:41.758 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:41.758 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:41.758 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:41.758 issued rwts: total=2228,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:41.758 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:41.758 job3: (groupid=0, jobs=1): err= 0: pid=68558: Mon Jul 15 16:59:31 2024 00:10:41.758 read: IOPS=1730, BW=6921KiB/s (7087kB/s)(6928KiB/1001msec) 00:10:41.758 slat (nsec): min=13214, max=52049, avg=18140.42, stdev=5393.77 00:10:41.758 clat (usec): min=171, max=861, avg=296.88, stdev=62.94 00:10:41.758 lat (usec): min=195, max=897, avg=315.02, stdev=66.04 00:10:41.758 clat percentiles (usec): 00:10:41.758 | 1.00th=[ 229], 5.00th=[ 241], 10.00th=[ 247], 20.00th=[ 255], 00:10:41.758 | 30.00th=[ 262], 40.00th=[ 269], 50.00th=[ 273], 60.00th=[ 281], 00:10:41.758 | 70.00th=[ 318], 80.00th=[ 334], 90.00th=[ 363], 95.00th=[ 453], 00:10:41.758 | 99.00th=[ 494], 99.50th=[ 506], 99.90th=[ 758], 99.95th=[ 865], 00:10:41.758 | 99.99th=[ 865] 00:10:41.759 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:41.759 slat (usec): min=17, max=180, avg=27.33, stdev=11.16 00:10:41.759 clat (usec): min=71, max=1210, avg=190.41, stdev=72.18 00:10:41.759 lat (usec): min=133, max=1235, avg=217.74, stdev=78.56 00:10:41.759 clat percentiles (usec): 00:10:41.759 | 1.00th=[ 117], 5.00th=[ 122], 10.00th=[ 126], 20.00th=[ 133], 00:10:41.759 | 30.00th=[ 139], 40.00th=[ 157], 50.00th=[ 176], 60.00th=[ 186], 00:10:41.759 | 70.00th=[ 200], 80.00th=[ 239], 90.00th=[ 310], 95.00th=[ 343], 00:10:41.759 | 99.00th=[ 375], 99.50th=[ 383], 99.90th=[ 523], 99.95th=[ 523], 00:10:41.759 | 99.99th=[ 1205] 00:10:41.759 bw ( KiB/s): min= 8192, max= 8192, per=22.24%, avg=8192.00, stdev= 0.00, samples=1 00:10:41.759 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:41.759 lat (usec) : 100=0.03%, 250=50.53%, 500=48.99%, 750=0.37%, 1000=0.05% 00:10:41.759 lat (msec) : 2=0.03% 00:10:41.759 cpu : usr=1.50%, sys=7.10%, ctx=3781, majf=0, minf=15 00:10:41.759 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:41.759 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:41.759 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:41.759 issued rwts: total=1732,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:41.759 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:41.759 00:10:41.759 Run status group 0 (all jobs): 00:10:41.759 READ: bw=31.2MiB/s (32.7MB/s), 6921KiB/s-8903KiB/s (7087kB/s-9117kB/s), io=31.2MiB (32.7MB), run=1001-1001msec 00:10:41.759 WRITE: bw=36.0MiB/s (37.7MB/s), 8184KiB/s-9.99MiB/s (8380kB/s-10.5MB/s), io=36.0MiB (37.7MB), run=1001-1001msec 00:10:41.759 00:10:41.759 Disk stats (read/write): 00:10:41.759 nvme0n1: ios=2059/2048, merge=0/0, ticks=482/330, in_queue=812, util=87.15% 00:10:41.759 nvme0n2: ios=1559/1915, merge=0/0, ticks=494/328, in_queue=822, util=88.07% 00:10:41.759 nvme0n3: ios=2048/2132, merge=0/0, ticks=454/334, in_queue=788, 
util=89.19% 00:10:41.759 nvme0n4: ios=1536/1569, merge=0/0, ticks=475/324, in_queue=799, util=89.75% 00:10:41.759 16:59:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:41.759 [global] 00:10:41.759 thread=1 00:10:41.759 invalidate=1 00:10:41.759 rw=write 00:10:41.759 time_based=1 00:10:41.759 runtime=1 00:10:41.759 ioengine=libaio 00:10:41.759 direct=1 00:10:41.759 bs=4096 00:10:41.759 iodepth=128 00:10:41.759 norandommap=0 00:10:41.759 numjobs=1 00:10:41.759 00:10:41.759 verify_dump=1 00:10:41.759 verify_backlog=512 00:10:41.759 verify_state_save=0 00:10:41.759 do_verify=1 00:10:41.759 verify=crc32c-intel 00:10:41.759 [job0] 00:10:41.759 filename=/dev/nvme0n1 00:10:41.759 [job1] 00:10:41.759 filename=/dev/nvme0n2 00:10:41.759 [job2] 00:10:41.759 filename=/dev/nvme0n3 00:10:41.759 [job3] 00:10:41.759 filename=/dev/nvme0n4 00:10:41.759 Could not set queue depth (nvme0n1) 00:10:41.759 Could not set queue depth (nvme0n2) 00:10:41.759 Could not set queue depth (nvme0n3) 00:10:41.759 Could not set queue depth (nvme0n4) 00:10:41.759 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:41.759 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:41.759 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:41.759 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:41.759 fio-3.35 00:10:41.759 Starting 4 threads 00:10:43.133 00:10:43.133 job0: (groupid=0, jobs=1): err= 0: pid=68618: Mon Jul 15 16:59:32 2024 00:10:43.133 read: IOPS=1754, BW=7020KiB/s (7188kB/s)(7048KiB/1004msec) 00:10:43.133 slat (usec): min=4, max=12776, avg=223.30, stdev=954.28 00:10:43.133 clat (usec): min=1255, max=59778, avg=29482.33, stdev=15996.23 00:10:43.133 lat (usec): min=4243, max=60389, avg=29705.63, stdev=16081.77 00:10:43.133 clat percentiles (usec): 00:10:43.133 | 1.00th=[ 9896], 5.00th=[11600], 10.00th=[11994], 20.00th=[12387], 00:10:43.133 | 30.00th=[13566], 40.00th=[15533], 50.00th=[33424], 60.00th=[36963], 00:10:43.133 | 70.00th=[39584], 80.00th=[45876], 90.00th=[51643], 95.00th=[54789], 00:10:43.133 | 99.00th=[57934], 99.50th=[57934], 99.90th=[58459], 99.95th=[60031], 00:10:43.133 | 99.99th=[60031] 00:10:43.133 write: IOPS=2039, BW=8159KiB/s (8355kB/s)(8192KiB/1004msec); 0 zone resets 00:10:43.133 slat (usec): min=10, max=10883, avg=288.84, stdev=927.43 00:10:43.133 clat (usec): min=8205, max=85167, avg=36539.71, stdev=26629.50 00:10:43.133 lat (usec): min=9691, max=85208, avg=36828.54, stdev=26824.23 00:10:43.133 clat percentiles (usec): 00:10:43.133 | 1.00th=[ 9372], 5.00th=[10683], 10.00th=[11207], 20.00th=[11338], 00:10:43.133 | 30.00th=[11469], 40.00th=[12125], 50.00th=[14091], 60.00th=[49021], 00:10:43.133 | 70.00th=[53740], 80.00th=[63177], 90.00th=[77071], 95.00th=[83362], 00:10:43.133 | 99.00th=[85459], 99.50th=[85459], 99.90th=[85459], 99.95th=[85459], 00:10:43.133 | 99.99th=[85459] 00:10:43.133 bw ( KiB/s): min= 5360, max=11046, per=14.89%, avg=8203.00, stdev=4020.61, samples=2 00:10:43.133 iops : min= 1340, max= 2761, avg=2050.50, stdev=1004.80, samples=2 00:10:43.133 lat (msec) : 2=0.03%, 10=1.21%, 20=45.85%, 50=25.30%, 100=27.61% 00:10:43.133 cpu : usr=1.99%, sys=6.08%, ctx=437, majf=0, minf=1 00:10:43.133 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, 
>=64=98.3% 00:10:43.133 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:43.133 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:43.133 issued rwts: total=1762,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:43.133 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:43.133 job1: (groupid=0, jobs=1): err= 0: pid=68619: Mon Jul 15 16:59:32 2024 00:10:43.133 read: IOPS=6381, BW=24.9MiB/s (26.1MB/s)(25.0MiB/1003msec) 00:10:43.133 slat (usec): min=7, max=2649, avg=73.81, stdev=338.15 00:10:43.133 clat (usec): min=316, max=12599, avg=9879.49, stdev=980.84 00:10:43.133 lat (usec): min=2395, max=12617, avg=9953.30, stdev=926.36 00:10:43.133 clat percentiles (usec): 00:10:43.133 | 1.00th=[ 5211], 5.00th=[ 9372], 10.00th=[ 9503], 20.00th=[ 9503], 00:10:43.133 | 30.00th=[ 9634], 40.00th=[ 9765], 50.00th=[ 9765], 60.00th=[ 9896], 00:10:43.133 | 70.00th=[10028], 80.00th=[10159], 90.00th=[10945], 95.00th=[11600], 00:10:43.133 | 99.00th=[12518], 99.50th=[12518], 99.90th=[12649], 99.95th=[12649], 00:10:43.133 | 99.99th=[12649] 00:10:43.133 write: IOPS=6636, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1003msec); 0 zone resets 00:10:43.133 slat (usec): min=10, max=2346, avg=72.29, stdev=293.16 00:10:43.133 clat (usec): min=7076, max=12104, avg=9534.40, stdev=745.96 00:10:43.133 lat (usec): min=8552, max=12150, avg=9606.68, stdev=689.22 00:10:43.133 clat percentiles (usec): 00:10:43.133 | 1.00th=[ 7635], 5.00th=[ 8848], 10.00th=[ 8979], 20.00th=[ 9110], 00:10:43.133 | 30.00th=[ 9241], 40.00th=[ 9241], 50.00th=[ 9372], 60.00th=[ 9372], 00:10:43.133 | 70.00th=[ 9503], 80.00th=[ 9634], 90.00th=[10814], 95.00th=[11338], 00:10:43.133 | 99.00th=[11731], 99.50th=[11731], 99.90th=[12125], 99.95th=[12125], 00:10:43.133 | 99.99th=[12125] 00:10:43.133 bw ( KiB/s): min=25146, max=28152, per=48.39%, avg=26649.00, stdev=2125.56, samples=2 00:10:43.133 iops : min= 6286, max= 7038, avg=6662.00, stdev=531.74, samples=2 00:10:43.133 lat (usec) : 500=0.01% 00:10:43.133 lat (msec) : 4=0.25%, 10=78.33%, 20=21.42% 00:10:43.133 cpu : usr=5.19%, sys=16.47%, ctx=410, majf=0, minf=5 00:10:43.133 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:10:43.133 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:43.133 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:43.133 issued rwts: total=6401,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:43.133 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:43.133 job2: (groupid=0, jobs=1): err= 0: pid=68620: Mon Jul 15 16:59:32 2024 00:10:43.133 read: IOPS=3438, BW=13.4MiB/s (14.1MB/s)(13.5MiB/1004msec) 00:10:43.133 slat (usec): min=6, max=8463, avg=147.02, stdev=767.71 00:10:43.133 clat (usec): min=3323, max=28474, avg=18956.32, stdev=4342.64 00:10:43.133 lat (usec): min=3339, max=28489, avg=19103.34, stdev=4315.05 00:10:43.133 clat percentiles (usec): 00:10:43.133 | 1.00th=[10421], 5.00th=[14222], 10.00th=[15795], 20.00th=[16057], 00:10:43.133 | 30.00th=[16188], 40.00th=[16188], 50.00th=[16450], 60.00th=[18220], 00:10:43.133 | 70.00th=[22152], 80.00th=[24773], 90.00th=[25297], 95.00th=[25822], 00:10:43.133 | 99.00th=[26870], 99.50th=[27395], 99.90th=[27395], 99.95th=[28443], 00:10:43.133 | 99.99th=[28443] 00:10:43.133 write: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec); 0 zone resets 00:10:43.133 slat (usec): min=11, max=6775, avg=128.52, stdev=608.63 00:10:43.133 clat (usec): min=10130, max=26905, avg=17056.49, stdev=3761.62 
00:10:43.134 lat (usec): min=12386, max=26954, avg=17185.01, stdev=3734.76 00:10:43.134 clat percentiles (usec): 00:10:43.134 | 1.00th=[11731], 5.00th=[12649], 10.00th=[12780], 20.00th=[13173], 00:10:43.134 | 30.00th=[14615], 40.00th=[15533], 50.00th=[16188], 60.00th=[16909], 00:10:43.134 | 70.00th=[18482], 80.00th=[20579], 90.00th=[23200], 95.00th=[23725], 00:10:43.134 | 99.00th=[26608], 99.50th=[26870], 99.90th=[26870], 99.95th=[26870], 00:10:43.134 | 99.99th=[26870] 00:10:43.134 bw ( KiB/s): min=14072, max=14600, per=26.03%, avg=14336.00, stdev=373.35, samples=2 00:10:43.134 iops : min= 3518, max= 3650, avg=3584.00, stdev=93.34, samples=2 00:10:43.134 lat (msec) : 4=0.40%, 20=69.88%, 50=29.72% 00:10:43.134 cpu : usr=3.29%, sys=10.87%, ctx=242, majf=0, minf=6 00:10:43.134 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:10:43.134 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:43.134 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:43.134 issued rwts: total=3452,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:43.134 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:43.134 job3: (groupid=0, jobs=1): err= 0: pid=68621: Mon Jul 15 16:59:32 2024 00:10:43.134 read: IOPS=1407, BW=5629KiB/s (5765kB/s)(5652KiB/1004msec) 00:10:43.134 slat (usec): min=4, max=10003, avg=313.23, stdev=1286.62 00:10:43.134 clat (usec): min=891, max=68036, avg=37116.81, stdev=12743.60 00:10:43.134 lat (usec): min=3962, max=68052, avg=37430.04, stdev=12782.90 00:10:43.134 clat percentiles (usec): 00:10:43.134 | 1.00th=[10814], 5.00th=[19530], 10.00th=[24249], 20.00th=[25035], 00:10:43.134 | 30.00th=[29230], 40.00th=[35390], 50.00th=[36439], 60.00th=[36963], 00:10:43.134 | 70.00th=[40633], 80.00th=[46400], 90.00th=[58983], 95.00th=[62129], 00:10:43.134 | 99.00th=[66323], 99.50th=[66847], 99.90th=[67634], 99.95th=[67634], 00:10:43.134 | 99.99th=[67634] 00:10:43.134 write: IOPS=1529, BW=6120KiB/s (6266kB/s)(6144KiB/1004msec); 0 zone resets 00:10:43.134 slat (usec): min=11, max=12323, avg=354.56, stdev=1138.15 00:10:43.134 clat (usec): min=17613, max=85284, avg=47893.14, stdev=22108.64 00:10:43.134 lat (usec): min=20420, max=85325, avg=48247.70, stdev=22251.86 00:10:43.134 clat percentiles (usec): 00:10:43.134 | 1.00th=[20317], 5.00th=[22938], 10.00th=[22938], 20.00th=[23462], 00:10:43.134 | 30.00th=[23987], 40.00th=[36439], 50.00th=[46400], 60.00th=[57934], 00:10:43.134 | 70.00th=[63177], 80.00th=[69731], 90.00th=[81265], 95.00th=[84411], 00:10:43.134 | 99.00th=[85459], 99.50th=[85459], 99.90th=[85459], 99.95th=[85459], 00:10:43.134 | 99.99th=[85459] 00:10:43.134 bw ( KiB/s): min= 4096, max= 8208, per=11.17%, avg=6152.00, stdev=2907.62, samples=2 00:10:43.134 iops : min= 1024, max= 2052, avg=1538.00, stdev=726.91, samples=2 00:10:43.134 lat (usec) : 1000=0.03% 00:10:43.134 lat (msec) : 4=0.07%, 10=0.14%, 20=2.81%, 50=64.50%, 100=32.45% 00:10:43.134 cpu : usr=1.60%, sys=5.48%, ctx=395, majf=0, minf=9 00:10:43.134 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:10:43.134 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:43.134 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:43.134 issued rwts: total=1413,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:43.134 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:43.134 00:10:43.134 Run status group 0 (all jobs): 00:10:43.134 READ: bw=50.7MiB/s (53.1MB/s), 5629KiB/s-24.9MiB/s 
(5765kB/s-26.1MB/s), io=50.9MiB (53.4MB), run=1003-1004msec 00:10:43.134 WRITE: bw=53.8MiB/s (56.4MB/s), 6120KiB/s-25.9MiB/s (6266kB/s-27.2MB/s), io=54.0MiB (56.6MB), run=1003-1004msec 00:10:43.134 00:10:43.134 Disk stats (read/write): 00:10:43.134 nvme0n1: ios=1585/1837, merge=0/0, ticks=9961/15601, in_queue=25562, util=87.85% 00:10:43.134 nvme0n2: ios=5539/5632, merge=0/0, ticks=12064/11263, in_queue=23327, util=88.79% 00:10:43.134 nvme0n3: ios=3008/3072, merge=0/0, ticks=13383/11297, in_queue=24680, util=89.44% 00:10:43.134 nvme0n4: ios=1024/1532, merge=0/0, ticks=9075/17111, in_queue=26186, util=89.71% 00:10:43.134 16:59:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:43.134 [global] 00:10:43.134 thread=1 00:10:43.134 invalidate=1 00:10:43.134 rw=randwrite 00:10:43.134 time_based=1 00:10:43.134 runtime=1 00:10:43.134 ioengine=libaio 00:10:43.134 direct=1 00:10:43.134 bs=4096 00:10:43.134 iodepth=128 00:10:43.134 norandommap=0 00:10:43.134 numjobs=1 00:10:43.134 00:10:43.134 verify_dump=1 00:10:43.134 verify_backlog=512 00:10:43.134 verify_state_save=0 00:10:43.134 do_verify=1 00:10:43.134 verify=crc32c-intel 00:10:43.134 [job0] 00:10:43.134 filename=/dev/nvme0n1 00:10:43.134 [job1] 00:10:43.134 filename=/dev/nvme0n2 00:10:43.134 [job2] 00:10:43.134 filename=/dev/nvme0n3 00:10:43.134 [job3] 00:10:43.134 filename=/dev/nvme0n4 00:10:43.134 Could not set queue depth (nvme0n1) 00:10:43.134 Could not set queue depth (nvme0n2) 00:10:43.134 Could not set queue depth (nvme0n3) 00:10:43.134 Could not set queue depth (nvme0n4) 00:10:43.134 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:43.134 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:43.134 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:43.134 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:43.134 fio-3.35 00:10:43.134 Starting 4 threads 00:10:44.068 00:10:44.068 job0: (groupid=0, jobs=1): err= 0: pid=68679: Mon Jul 15 16:59:34 2024 00:10:44.068 read: IOPS=5708, BW=22.3MiB/s (23.4MB/s)(22.3MiB/1001msec) 00:10:44.068 slat (usec): min=8, max=4267, avg=83.26, stdev=358.37 00:10:44.068 clat (usec): min=565, max=15285, avg=11061.51, stdev=1055.71 00:10:44.068 lat (usec): min=2105, max=15351, avg=11144.76, stdev=1057.40 00:10:44.068 clat percentiles (usec): 00:10:44.068 | 1.00th=[ 8455], 5.00th=[ 9634], 10.00th=[10028], 20.00th=[10683], 00:10:44.068 | 30.00th=[10945], 40.00th=[11076], 50.00th=[11207], 60.00th=[11338], 00:10:44.068 | 70.00th=[11338], 80.00th=[11469], 90.00th=[11863], 95.00th=[12387], 00:10:44.068 | 99.00th=[13698], 99.50th=[13829], 99.90th=[14222], 99.95th=[14484], 00:10:44.068 | 99.99th=[15270] 00:10:44.068 write: IOPS=6137, BW=24.0MiB/s (25.1MB/s)(24.0MiB/1001msec); 0 zone resets 00:10:44.068 slat (usec): min=10, max=4462, avg=78.15, stdev=444.89 00:10:44.068 clat (usec): min=5501, max=15435, avg=10331.83, stdev=941.93 00:10:44.068 lat (usec): min=6206, max=15507, avg=10409.98, stdev=1028.44 00:10:44.068 clat percentiles (usec): 00:10:44.068 | 1.00th=[ 7177], 5.00th=[ 9241], 10.00th=[ 9503], 20.00th=[ 9765], 00:10:44.068 | 30.00th=[10028], 40.00th=[10159], 50.00th=[10290], 60.00th=[10421], 00:10:44.068 | 70.00th=[10683], 80.00th=[10814], 90.00th=[11076], 
95.00th=[11600], 00:10:44.068 | 99.00th=[13698], 99.50th=[14222], 99.90th=[14877], 99.95th=[15270], 00:10:44.068 | 99.99th=[15401] 00:10:44.068 bw ( KiB/s): min=24576, max=24576, per=36.16%, avg=24576.00, stdev= 0.00, samples=1 00:10:44.068 iops : min= 6144, max= 6144, avg=6144.00, stdev= 0.00, samples=1 00:10:44.068 lat (usec) : 750=0.01% 00:10:44.068 lat (msec) : 4=0.15%, 10=19.72%, 20=80.12% 00:10:44.068 cpu : usr=5.50%, sys=14.50%, ctx=365, majf=0, minf=11 00:10:44.068 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:10:44.068 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:44.068 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:44.068 issued rwts: total=5714,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:44.068 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:44.068 job1: (groupid=0, jobs=1): err= 0: pid=68680: Mon Jul 15 16:59:34 2024 00:10:44.069 read: IOPS=2554, BW=9.98MiB/s (10.5MB/s)(10.0MiB/1002msec) 00:10:44.069 slat (usec): min=4, max=7902, avg=190.49, stdev=737.65 00:10:44.069 clat (usec): min=15404, max=38630, avg=24480.15, stdev=3108.16 00:10:44.069 lat (usec): min=15426, max=38648, avg=24670.64, stdev=3108.07 00:10:44.069 clat percentiles (usec): 00:10:44.069 | 1.00th=[16712], 5.00th=[19792], 10.00th=[21365], 20.00th=[22938], 00:10:44.069 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23987], 60.00th=[24249], 00:10:44.069 | 70.00th=[24511], 80.00th=[26346], 90.00th=[28443], 95.00th=[30278], 00:10:44.069 | 99.00th=[33817], 99.50th=[34866], 99.90th=[38536], 99.95th=[38536], 00:10:44.069 | 99.99th=[38536] 00:10:44.069 write: IOPS=2858, BW=11.2MiB/s (11.7MB/s)(11.2MiB/1002msec); 0 zone resets 00:10:44.069 slat (usec): min=5, max=13384, avg=170.83, stdev=801.13 00:10:44.069 clat (usec): min=1043, max=35315, avg=21968.49, stdev=4512.78 00:10:44.069 lat (usec): min=1065, max=35656, avg=22139.31, stdev=4516.38 00:10:44.069 clat percentiles (usec): 00:10:44.069 | 1.00th=[ 1811], 5.00th=[13829], 10.00th=[17433], 20.00th=[19792], 00:10:44.069 | 30.00th=[21890], 40.00th=[22414], 50.00th=[22938], 60.00th=[23200], 00:10:44.069 | 70.00th=[23725], 80.00th=[24249], 90.00th=[25035], 95.00th=[27395], 00:10:44.069 | 99.00th=[32900], 99.50th=[33162], 99.90th=[34341], 99.95th=[34341], 00:10:44.069 | 99.99th=[35390] 00:10:44.069 bw ( KiB/s): min= 9808, max=12288, per=16.26%, avg=11048.00, stdev=1753.62, samples=2 00:10:44.069 iops : min= 2454, max= 3072, avg=2763.00, stdev=436.99, samples=2 00:10:44.069 lat (msec) : 2=0.63%, 4=0.15%, 10=1.07%, 20=11.76%, 50=86.39% 00:10:44.069 cpu : usr=3.20%, sys=7.49%, ctx=634, majf=0, minf=16 00:10:44.069 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:10:44.069 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:44.069 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:44.069 issued rwts: total=2560,2864,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:44.069 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:44.069 job2: (groupid=0, jobs=1): err= 0: pid=68681: Mon Jul 15 16:59:34 2024 00:10:44.069 read: IOPS=5049, BW=19.7MiB/s (20.7MB/s)(19.8MiB/1005msec) 00:10:44.069 slat (usec): min=7, max=9966, avg=97.53, stdev=590.08 00:10:44.069 clat (usec): min=1662, max=22618, avg=13031.15, stdev=1786.07 00:10:44.069 lat (usec): min=6549, max=25741, avg=13128.67, stdev=1802.54 00:10:44.069 clat percentiles (usec): 00:10:44.069 | 1.00th=[ 7504], 5.00th=[ 9634], 10.00th=[11994], 
20.00th=[12518], 00:10:44.069 | 30.00th=[12780], 40.00th=[12911], 50.00th=[13042], 60.00th=[13042], 00:10:44.069 | 70.00th=[13304], 80.00th=[13566], 90.00th=[14091], 95.00th=[16712], 00:10:44.069 | 99.00th=[20055], 99.50th=[20579], 99.90th=[22414], 99.95th=[22414], 00:10:44.069 | 99.99th=[22676] 00:10:44.069 write: IOPS=5094, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1005msec); 0 zone resets 00:10:44.069 slat (usec): min=10, max=9221, avg=91.27, stdev=545.31 00:10:44.069 clat (usec): min=6253, max=17308, avg=11946.29, stdev=1266.40 00:10:44.069 lat (usec): min=6281, max=17397, avg=12037.56, stdev=1174.62 00:10:44.069 clat percentiles (usec): 00:10:44.069 | 1.00th=[ 6652], 5.00th=[10290], 10.00th=[10814], 20.00th=[11338], 00:10:44.069 | 30.00th=[11731], 40.00th=[11863], 50.00th=[12125], 60.00th=[12256], 00:10:44.069 | 70.00th=[12387], 80.00th=[12649], 90.00th=[12911], 95.00th=[13042], 00:10:44.069 | 99.00th=[16909], 99.50th=[16909], 99.90th=[17171], 99.95th=[17171], 00:10:44.069 | 99.99th=[17433] 00:10:44.069 bw ( KiB/s): min=20480, max=20480, per=30.14%, avg=20480.00, stdev= 0.00, samples=2 00:10:44.069 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:10:44.069 lat (msec) : 2=0.01%, 10=4.80%, 20=94.63%, 50=0.56% 00:10:44.069 cpu : usr=4.68%, sys=13.65%, ctx=224, majf=0, minf=7 00:10:44.069 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:44.069 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:44.069 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:44.069 issued rwts: total=5075,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:44.069 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:44.069 job3: (groupid=0, jobs=1): err= 0: pid=68682: Mon Jul 15 16:59:34 2024 00:10:44.069 read: IOPS=2552, BW=9.97MiB/s (10.5MB/s)(10.0MiB/1003msec) 00:10:44.069 slat (usec): min=4, max=10521, avg=190.95, stdev=775.53 00:10:44.069 clat (usec): min=13562, max=35926, avg=24193.06, stdev=3085.69 00:10:44.069 lat (usec): min=13590, max=36634, avg=24384.01, stdev=3115.88 00:10:44.069 clat percentiles (usec): 00:10:44.069 | 1.00th=[15270], 5.00th=[19530], 10.00th=[21365], 20.00th=[22938], 00:10:44.069 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987], 00:10:44.069 | 70.00th=[24249], 80.00th=[25035], 90.00th=[27919], 95.00th=[30540], 00:10:44.069 | 99.00th=[33424], 99.50th=[34341], 99.90th=[35390], 99.95th=[35390], 00:10:44.069 | 99.99th=[35914] 00:10:44.069 write: IOPS=2938, BW=11.5MiB/s (12.0MB/s)(11.5MiB/1003msec); 0 zone resets 00:10:44.069 slat (usec): min=5, max=9620, avg=166.47, stdev=768.46 00:10:44.069 clat (usec): min=836, max=34685, avg=22061.16, stdev=3982.83 00:10:44.069 lat (usec): min=4073, max=35397, avg=22227.63, stdev=3974.43 00:10:44.069 clat percentiles (usec): 00:10:44.069 | 1.00th=[10159], 5.00th=[14222], 10.00th=[17171], 20.00th=[19006], 00:10:44.069 | 30.00th=[21365], 40.00th=[22414], 50.00th=[22938], 60.00th=[23200], 00:10:44.069 | 70.00th=[23725], 80.00th=[24511], 90.00th=[25822], 95.00th=[27657], 00:10:44.069 | 99.00th=[31327], 99.50th=[33162], 99.90th=[34866], 99.95th=[34866], 00:10:44.069 | 99.99th=[34866] 00:10:44.069 bw ( KiB/s): min=10264, max=12312, per=16.61%, avg=11288.00, stdev=1448.15, samples=2 00:10:44.069 iops : min= 2566, max= 3078, avg=2822.00, stdev=362.04, samples=2 00:10:44.069 lat (usec) : 1000=0.02% 00:10:44.069 lat (msec) : 10=0.44%, 20=14.84%, 50=84.71% 00:10:44.069 cpu : usr=2.79%, sys=7.78%, ctx=678, majf=0, minf=11 00:10:44.069 IO depths : 
1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:10:44.069 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:44.069 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:44.069 issued rwts: total=2560,2947,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:44.069 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:44.069 00:10:44.069 Run status group 0 (all jobs): 00:10:44.069 READ: bw=61.8MiB/s (64.8MB/s), 9.97MiB/s-22.3MiB/s (10.5MB/s-23.4MB/s), io=62.1MiB (65.2MB), run=1001-1005msec 00:10:44.069 WRITE: bw=66.4MiB/s (69.6MB/s), 11.2MiB/s-24.0MiB/s (11.7MB/s-25.1MB/s), io=66.7MiB (69.9MB), run=1001-1005msec 00:10:44.069 00:10:44.069 Disk stats (read/write): 00:10:44.069 nvme0n1: ios=5109/5120, merge=0/0, ticks=26883/21817, in_queue=48700, util=88.47% 00:10:44.327 nvme0n2: ios=2089/2560, merge=0/0, ticks=24716/26611, in_queue=51327, util=87.56% 00:10:44.327 nvme0n3: ios=4096/4608, merge=0/0, ticks=50432/50580, in_queue=101012, util=89.18% 00:10:44.327 nvme0n4: ios=2142/2560, merge=0/0, ticks=25346/25897, in_queue=51243, util=89.32% 00:10:44.327 16:59:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:44.327 16:59:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=68696 00:10:44.327 16:59:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:44.327 16:59:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:44.327 [global] 00:10:44.327 thread=1 00:10:44.327 invalidate=1 00:10:44.327 rw=read 00:10:44.327 time_based=1 00:10:44.327 runtime=10 00:10:44.327 ioengine=libaio 00:10:44.327 direct=1 00:10:44.327 bs=4096 00:10:44.327 iodepth=1 00:10:44.327 norandommap=1 00:10:44.327 numjobs=1 00:10:44.327 00:10:44.327 [job0] 00:10:44.327 filename=/dev/nvme0n1 00:10:44.327 [job1] 00:10:44.327 filename=/dev/nvme0n2 00:10:44.327 [job2] 00:10:44.327 filename=/dev/nvme0n3 00:10:44.327 [job3] 00:10:44.327 filename=/dev/nvme0n4 00:10:44.327 Could not set queue depth (nvme0n1) 00:10:44.327 Could not set queue depth (nvme0n2) 00:10:44.327 Could not set queue depth (nvme0n3) 00:10:44.327 Could not set queue depth (nvme0n4) 00:10:44.327 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:44.327 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:44.327 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:44.327 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:44.327 fio-3.35 00:10:44.327 Starting 4 threads 00:10:47.607 16:59:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:47.607 fio: pid=68739, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:47.607 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=43376640, buflen=4096 00:10:47.607 16:59:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:47.607 fio: pid=68738, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:47.607 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=46862336, buflen=4096 00:10:47.917 16:59:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs 
$concat_malloc_bdevs 00:10:47.917 16:59:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:47.917 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=53567488, buflen=4096 00:10:47.917 fio: pid=68736, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:47.917 16:59:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:47.917 16:59:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:48.175 fio: pid=68737, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:48.175 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=60092416, buflen=4096 00:10:48.175 00:10:48.175 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=68736: Mon Jul 15 16:59:38 2024 00:10:48.175 read: IOPS=3825, BW=14.9MiB/s (15.7MB/s)(51.1MiB/3419msec) 00:10:48.175 slat (usec): min=11, max=15810, avg=16.79, stdev=181.87 00:10:48.175 clat (usec): min=134, max=2936, avg=243.26, stdev=69.29 00:10:48.175 lat (usec): min=146, max=16016, avg=260.05, stdev=194.17 00:10:48.175 clat percentiles (usec): 00:10:48.175 | 1.00th=[ 143], 5.00th=[ 149], 10.00th=[ 155], 20.00th=[ 231], 00:10:48.175 | 30.00th=[ 241], 40.00th=[ 245], 50.00th=[ 251], 60.00th=[ 258], 00:10:48.175 | 70.00th=[ 265], 80.00th=[ 273], 90.00th=[ 285], 95.00th=[ 297], 00:10:48.175 | 99.00th=[ 371], 99.50th=[ 416], 99.90th=[ 848], 99.95th=[ 1188], 00:10:48.175 | 99.99th=[ 2671] 00:10:48.175 bw ( KiB/s): min=14000, max=14752, per=26.89%, avg=14530.67, stdev=268.29, samples=6 00:10:48.175 iops : min= 3500, max= 3688, avg=3632.67, stdev=67.07, samples=6 00:10:48.175 lat (usec) : 250=47.75%, 500=51.95%, 750=0.15%, 1000=0.08% 00:10:48.175 lat (msec) : 2=0.04%, 4=0.03% 00:10:48.175 cpu : usr=1.11%, sys=4.36%, ctx=13086, majf=0, minf=1 00:10:48.175 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:48.175 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:48.175 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:48.175 issued rwts: total=13079,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:48.175 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:48.175 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=68737: Mon Jul 15 16:59:38 2024 00:10:48.175 read: IOPS=3981, BW=15.6MiB/s (16.3MB/s)(57.3MiB/3685msec) 00:10:48.175 slat (usec): min=11, max=14789, avg=18.64, stdev=215.36 00:10:48.175 clat (usec): min=126, max=2942, avg=231.24, stdev=61.57 00:10:48.175 lat (usec): min=140, max=15136, avg=249.88, stdev=224.45 00:10:48.175 clat percentiles (usec): 00:10:48.175 | 1.00th=[ 135], 5.00th=[ 141], 10.00th=[ 145], 20.00th=[ 161], 00:10:48.175 | 30.00th=[ 235], 40.00th=[ 241], 50.00th=[ 247], 60.00th=[ 253], 00:10:48.175 | 70.00th=[ 258], 80.00th=[ 265], 90.00th=[ 273], 95.00th=[ 281], 00:10:48.175 | 99.00th=[ 302], 99.50th=[ 318], 99.90th=[ 603], 99.95th=[ 873], 00:10:48.175 | 99.99th=[ 2835] 00:10:48.175 bw ( KiB/s): min=14448, max=20097, per=28.95%, avg=15644.71, stdev=1987.32, samples=7 00:10:48.175 iops : min= 3612, max= 5024, avg=3911.14, stdev=496.74, samples=7 00:10:48.175 lat (usec) : 250=56.14%, 500=43.70%, 750=0.08%, 1000=0.03% 00:10:48.175 lat (msec) : 2=0.02%, 4=0.01% 00:10:48.175 cpu : 
usr=1.25%, sys=4.80%, ctx=14680, majf=0, minf=1 00:10:48.175 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:48.175 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:48.175 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:48.175 issued rwts: total=14672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:48.175 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:48.175 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=68738: Mon Jul 15 16:59:38 2024 00:10:48.175 read: IOPS=3616, BW=14.1MiB/s (14.8MB/s)(44.7MiB/3164msec) 00:10:48.175 slat (usec): min=11, max=8676, avg=15.96, stdev=105.74 00:10:48.175 clat (usec): min=143, max=7340, avg=259.22, stdev=164.02 00:10:48.175 lat (usec): min=155, max=8970, avg=275.19, stdev=195.82 00:10:48.175 clat percentiles (usec): 00:10:48.175 | 1.00th=[ 165], 5.00th=[ 231], 10.00th=[ 235], 20.00th=[ 239], 00:10:48.175 | 30.00th=[ 243], 40.00th=[ 249], 50.00th=[ 253], 60.00th=[ 258], 00:10:48.175 | 70.00th=[ 262], 80.00th=[ 269], 90.00th=[ 277], 95.00th=[ 281], 00:10:48.175 | 99.00th=[ 314], 99.50th=[ 371], 99.90th=[ 3687], 99.95th=[ 3982], 00:10:48.175 | 99.99th=[ 7242] 00:10:48.175 bw ( KiB/s): min=13912, max=14880, per=26.75%, avg=14457.33, stdev=374.12, samples=6 00:10:48.175 iops : min= 3478, max= 3720, avg=3614.33, stdev=93.53, samples=6 00:10:48.175 lat (usec) : 250=44.75%, 500=55.03%, 750=0.04%, 1000=0.02% 00:10:48.175 lat (msec) : 2=0.01%, 4=0.10%, 10=0.04% 00:10:48.175 cpu : usr=0.89%, sys=4.62%, ctx=11444, majf=0, minf=1 00:10:48.176 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:48.176 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:48.176 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:48.176 issued rwts: total=11442,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:48.176 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:48.176 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=68739: Mon Jul 15 16:59:38 2024 00:10:48.176 read: IOPS=3604, BW=14.1MiB/s (14.8MB/s)(41.4MiB/2938msec) 00:10:48.176 slat (nsec): min=11560, max=73974, avg=14985.99, stdev=5128.83 00:10:48.176 clat (usec): min=152, max=2109, avg=260.77, stdev=40.79 00:10:48.176 lat (usec): min=165, max=2131, avg=275.76, stdev=40.85 00:10:48.176 clat percentiles (usec): 00:10:48.176 | 1.00th=[ 219], 5.00th=[ 233], 10.00th=[ 237], 20.00th=[ 241], 00:10:48.176 | 30.00th=[ 247], 40.00th=[ 251], 50.00th=[ 255], 60.00th=[ 262], 00:10:48.176 | 70.00th=[ 269], 80.00th=[ 273], 90.00th=[ 285], 95.00th=[ 306], 00:10:48.176 | 99.00th=[ 371], 99.50th=[ 404], 99.90th=[ 619], 99.95th=[ 652], 00:10:48.176 | 99.99th=[ 1909] 00:10:48.176 bw ( KiB/s): min=14040, max=14768, per=26.86%, avg=14512.00, stdev=281.54, samples=5 00:10:48.176 iops : min= 3510, max= 3692, avg=3628.00, stdev=70.38, samples=5 00:10:48.176 lat (usec) : 250=38.59%, 500=61.20%, 750=0.15%, 1000=0.01% 00:10:48.176 lat (msec) : 2=0.03%, 4=0.01% 00:10:48.176 cpu : usr=1.16%, sys=4.87%, ctx=10591, majf=0, minf=1 00:10:48.176 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:48.176 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:48.176 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:48.176 issued rwts: total=10591,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:48.176 
latency : target=0, window=0, percentile=100.00%, depth=1 00:10:48.176 00:10:48.176 Run status group 0 (all jobs): 00:10:48.176 READ: bw=52.8MiB/s (55.3MB/s), 14.1MiB/s-15.6MiB/s (14.8MB/s-16.3MB/s), io=194MiB (204MB), run=2938-3685msec 00:10:48.176 00:10:48.176 Disk stats (read/write): 00:10:48.176 nvme0n1: ios=12793/0, merge=0/0, ticks=3175/0, in_queue=3175, util=95.36% 00:10:48.176 nvme0n2: ios=14262/0, merge=0/0, ticks=3382/0, in_queue=3382, util=95.29% 00:10:48.176 nvme0n3: ios=11276/0, merge=0/0, ticks=2921/0, in_queue=2921, util=95.65% 00:10:48.176 nvme0n4: ios=10359/0, merge=0/0, ticks=2716/0, in_queue=2716, util=96.76% 00:10:48.176 16:59:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:48.176 16:59:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:48.433 16:59:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:48.433 16:59:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:48.996 16:59:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:48.996 16:59:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:48.996 16:59:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:48.996 16:59:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:49.561 16:59:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:49.561 16:59:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:49.561 16:59:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:49.561 16:59:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 68696 00:10:49.561 16:59:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:49.561 16:59:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:49.561 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:49.561 16:59:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:49.819 16:59:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:10:49.819 16:59:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:49.819 16:59:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:49.819 16:59:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:49.819 16:59:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:49.819 nvmf hotplug test: fio failed as expected 00:10:49.819 16:59:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:10:49.819 16:59:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:49.819 16:59:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 
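The hotplug check traced above starts fio in the background with a 10-second time-based read job (iodepth=1) against the four namespaces, then deletes the backing bdevs over RPC while I/O is still in flight; every job therefore ends with err=121 (Remote I/O error), which is exactly what the final "fio failed as expected" message reports. A condensed, hypothetical replay of that sequence, with the bdev names, script paths and fio-wrapper flags copied from this run and error handling simplified:

# Hypothetical condensed replay of the hot-remove check seen in the trace above.
/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &
fio_pid=$!
sleep 3                                                      # let the read jobs get going
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0
for bdev in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete "$bdev"   # yank the backing bdevs
done
wait "$fio_pid" && echo 'unexpected: fio survived the hot-remove' \
                || echo 'nvmf hotplug test: fio failed as expected'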
00:10:49.819 16:59:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:50.077 16:59:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:50.077 16:59:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:50.077 16:59:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:50.077 16:59:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:50.077 16:59:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:50.077 16:59:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:50.077 16:59:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:10:50.077 16:59:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:50.077 16:59:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:10:50.077 16:59:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:50.077 16:59:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:50.077 rmmod nvme_tcp 00:10:50.077 rmmod nvme_fabrics 00:10:50.077 rmmod nvme_keyring 00:10:50.077 16:59:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:50.077 16:59:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:10:50.077 16:59:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:10:50.077 16:59:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 68315 ']' 00:10:50.077 16:59:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 68315 00:10:50.077 16:59:40 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 68315 ']' 00:10:50.077 16:59:40 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 68315 00:10:50.077 16:59:40 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:10:50.077 16:59:40 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:50.077 16:59:40 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 68315 00:10:50.077 killing process with pid 68315 00:10:50.077 16:59:40 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:50.077 16:59:40 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:50.077 16:59:40 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68315' 00:10:50.077 16:59:40 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 68315 00:10:50.077 16:59:40 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 68315 00:10:50.335 16:59:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:50.335 16:59:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:50.335 16:59:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:50.335 16:59:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:50.335 16:59:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:50.335 16:59:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:50.335 16:59:40 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
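The teardown traced here (delete the subsystem, remove the fio verify-state files, unload the host NVMe transport modules, kill the nvmf_tgt application, then drop the test namespace and flush the initiator address) condenses to roughly the following; the pid, NQN and interface names are the ones from this run, and the netns removal line is only an assumption about what _remove_spdk_ns amounts to:

# Rough equivalent of the nvmftestfini steps traced above; values are run-specific.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
rm -f ./local-job0-0-verify.state ./local-job1-1-verify.state ./local-job2-2-verify.state
sync
modprobe -v -r nvme-tcp            # also drops nvme_fabrics and nvme_keyring, as the rmmod lines show
modprobe -v -r nvme-fabrics
kill 68315                         # the nvmf_tgt started for nvmf_fio_target (pid from this run)
ip netns delete nvmf_tgt_ns_spdk 2>/dev/null   # assumed equivalent of _remove_spdk_ns
ip -4 addr flush nvmf_init_if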
00:10:50.335 16:59:40 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:50.335 16:59:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:50.335 00:10:50.335 real 0m19.392s 00:10:50.335 user 1m13.643s 00:10:50.335 sys 0m9.911s 00:10:50.335 16:59:40 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:50.335 16:59:40 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:50.335 ************************************ 00:10:50.335 END TEST nvmf_fio_target 00:10:50.335 ************************************ 00:10:50.335 16:59:40 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:50.335 16:59:40 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:50.335 16:59:40 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:50.335 16:59:40 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:50.335 16:59:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:50.335 ************************************ 00:10:50.335 START TEST nvmf_bdevio 00:10:50.335 ************************************ 00:10:50.335 16:59:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:50.594 * Looking for test storage... 00:10:50.594 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:50.594 16:59:40 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:50.594 16:59:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:50.594 16:59:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:50.594 16:59:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:50.594 16:59:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:50.594 16:59:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:50.594 16:59:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:50.594 16:59:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:50.594 16:59:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:50.594 16:59:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:50.594 16:59:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:50.594 16:59:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:50.594 16:59:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da 00:10:50.594 16:59:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=0b4e8503-7bac-4879-926a-209303c4b3da 00:10:50.594 16:59:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:50.594 16:59:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:50.594 16:59:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:50.594 16:59:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:50.594 16:59:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:50.595 16:59:40 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 
00:10:50.595 16:59:40 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:50.595 16:59:40 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:50.595 16:59:40 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.595 16:59:40 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.595 16:59:40 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.595 16:59:40 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:50.595 16:59:40 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.595 16:59:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:10:50.595 16:59:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:50.595 16:59:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:50.595 16:59:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:50.595 16:59:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:50.595 16:59:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:50.595 16:59:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:50.595 16:59:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:50.595 16:59:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:50.595 16:59:40 nvmf_tcp.nvmf_bdevio -- 
target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:50.595 16:59:40 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:50.595 16:59:40 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:10:50.595 16:59:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:50.595 16:59:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:50.595 16:59:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:50.595 16:59:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:50.595 16:59:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:50.595 16:59:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:50.595 16:59:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:50.595 16:59:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:50.595 16:59:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:50.595 16:59:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:50.595 16:59:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:50.595 16:59:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:50.595 16:59:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:50.595 16:59:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:50.595 16:59:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:50.595 16:59:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:50.595 16:59:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:50.595 16:59:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:50.595 16:59:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:50.595 16:59:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:50.595 16:59:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:50.595 16:59:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:50.595 16:59:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:50.595 16:59:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:50.595 16:59:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:50.595 16:59:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:50.595 16:59:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:50.595 16:59:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:50.595 Cannot find device "nvmf_tgt_br" 00:10:50.595 16:59:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@155 -- # true 00:10:50.595 16:59:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:50.595 Cannot find device "nvmf_tgt_br2" 00:10:50.595 16:59:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@156 -- # true 00:10:50.595 16:59:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:50.595 16:59:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:50.595 
Cannot find device "nvmf_tgt_br" 00:10:50.595 16:59:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@158 -- # true 00:10:50.595 16:59:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:50.595 Cannot find device "nvmf_tgt_br2" 00:10:50.595 16:59:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@159 -- # true 00:10:50.595 16:59:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:50.595 16:59:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:50.595 16:59:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:50.595 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:50.595 16:59:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:10:50.595 16:59:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:50.595 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:50.595 16:59:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:10:50.595 16:59:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:50.595 16:59:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:50.595 16:59:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:50.595 16:59:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:50.595 16:59:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:50.595 16:59:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:50.595 16:59:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:50.595 16:59:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:50.854 16:59:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:50.854 16:59:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:50.854 16:59:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:50.854 16:59:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:50.854 16:59:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:50.854 16:59:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:50.854 16:59:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:50.854 16:59:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:50.854 16:59:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:50.854 16:59:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:50.854 16:59:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:50.854 16:59:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:50.854 16:59:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:50.854 16:59:40 
nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:50.854 16:59:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:50.854 16:59:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:50.854 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:50.854 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:10:50.854 00:10:50.854 --- 10.0.0.2 ping statistics --- 00:10:50.854 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:50.854 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:10:50.854 16:59:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:50.854 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:50.854 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.034 ms 00:10:50.854 00:10:50.854 --- 10.0.0.3 ping statistics --- 00:10:50.854 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:50.854 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:10:50.854 16:59:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:50.854 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:50.854 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:10:50.854 00:10:50.854 --- 10.0.0.1 ping statistics --- 00:10:50.854 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:50.854 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:10:50.854 16:59:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:50.854 16:59:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@433 -- # return 0 00:10:50.854 16:59:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:50.854 16:59:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:50.854 16:59:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:50.854 16:59:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:50.855 16:59:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:50.855 16:59:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:50.855 16:59:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:50.855 16:59:41 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:50.855 16:59:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:50.855 16:59:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:50.855 16:59:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:50.855 16:59:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=69008 00:10:50.855 16:59:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:50.855 16:59:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 69008 00:10:50.855 16:59:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 69008 ']' 00:10:50.855 16:59:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:50.855 16:59:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:50.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
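The nvmf_veth_init block above builds the test topology by hand: a network namespace for the target, veth pairs whose peer ends are enslaved to a bridge, 10.0.0.1/24 on the initiator side plus 10.0.0.2/24 and 10.0.0.3/24 inside the namespace, an iptables rule opening TCP port 4420, and a few pings as sanity checks. Stripped of the xtrace noise, the same setup looks roughly like this (interface names and addresses are the ones used here):

# Stripped-down sketch of the veth/bridge topology nvmf_veth_init sets up above.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                                   # initiator side
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if     # target listener address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2    # second target address
for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$l" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
for l in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$l" master nvmf_br; done
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                                   # sanity checks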
00:10:50.855 16:59:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:50.855 16:59:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:50.855 16:59:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:50.855 [2024-07-15 16:59:41.094287] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:10:50.855 [2024-07-15 16:59:41.094435] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:51.113 [2024-07-15 16:59:41.234197] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:51.113 [2024-07-15 16:59:41.352182] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:51.113 [2024-07-15 16:59:41.352256] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:51.113 [2024-07-15 16:59:41.352283] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:51.113 [2024-07-15 16:59:41.352291] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:51.113 [2024-07-15 16:59:41.352298] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:51.113 [2024-07-15 16:59:41.352487] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:10:51.113 [2024-07-15 16:59:41.352619] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:10:51.113 [2024-07-15 16:59:41.352789] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:10:51.113 [2024-07-15 16:59:41.352976] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:51.113 [2024-07-15 16:59:41.405295] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:52.050 16:59:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:52.050 16:59:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:10:52.050 16:59:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:52.050 16:59:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:52.050 16:59:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:52.050 16:59:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:52.050 16:59:42 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:52.050 16:59:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:52.050 16:59:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:52.050 [2024-07-15 16:59:42.078016] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:52.050 16:59:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:52.050 16:59:42 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:52.050 16:59:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:52.050 16:59:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:52.050 Malloc0 00:10:52.050 
16:59:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:52.050 16:59:42 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:52.050 16:59:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:52.050 16:59:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:52.050 16:59:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:52.050 16:59:42 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:52.050 16:59:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:52.050 16:59:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:52.050 16:59:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:52.050 16:59:42 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:52.050 16:59:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:52.050 16:59:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:52.050 [2024-07-15 16:59:42.153753] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:52.050 16:59:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:52.050 16:59:42 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:52.050 16:59:42 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:52.050 16:59:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:10:52.050 16:59:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:10:52.050 16:59:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:52.050 16:59:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:52.050 { 00:10:52.050 "params": { 00:10:52.050 "name": "Nvme$subsystem", 00:10:52.050 "trtype": "$TEST_TRANSPORT", 00:10:52.050 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:52.050 "adrfam": "ipv4", 00:10:52.050 "trsvcid": "$NVMF_PORT", 00:10:52.050 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:52.050 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:52.050 "hdgst": ${hdgst:-false}, 00:10:52.050 "ddgst": ${ddgst:-false} 00:10:52.050 }, 00:10:52.050 "method": "bdev_nvme_attach_controller" 00:10:52.050 } 00:10:52.050 EOF 00:10:52.050 )") 00:10:52.050 16:59:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:10:52.050 16:59:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:10:52.050 16:59:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:10:52.050 16:59:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:52.050 "params": { 00:10:52.050 "name": "Nvme1", 00:10:52.050 "trtype": "tcp", 00:10:52.050 "traddr": "10.0.0.2", 00:10:52.050 "adrfam": "ipv4", 00:10:52.050 "trsvcid": "4420", 00:10:52.050 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:52.050 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:52.050 "hdgst": false, 00:10:52.050 "ddgst": false 00:10:52.050 }, 00:10:52.050 "method": "bdev_nvme_attach_controller" 00:10:52.050 }' 00:10:52.050 [2024-07-15 16:59:42.210653] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
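Before bdevio starts, the target side is configured entirely through rpc_cmd, which in these tests is effectively rpc.py against the nvmf_tgt started earlier in the namespace. Written out as plain invocations, the configuration calls traced above are (sizes, NQN, serial, address and port are the values this run uses):

# The rpc_cmd invocations above, spelled out as plain rpc.py calls.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0     # 64 MiB bdev, 512 B blocks
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

bdevio itself is then pointed at that listener through the bdev_nvme_attach_controller JSON printed just above, passed in as --json /dev/fd/62.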
00:10:52.050 [2024-07-15 16:59:42.210756] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69044 ] 00:10:52.309 [2024-07-15 16:59:42.348624] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:52.309 [2024-07-15 16:59:42.498060] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:52.309 [2024-07-15 16:59:42.498205] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:52.309 [2024-07-15 16:59:42.498201] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:52.309 [2024-07-15 16:59:42.560573] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:52.568 I/O targets: 00:10:52.568 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:52.568 00:10:52.568 00:10:52.568 CUnit - A unit testing framework for C - Version 2.1-3 00:10:52.568 http://cunit.sourceforge.net/ 00:10:52.568 00:10:52.568 00:10:52.568 Suite: bdevio tests on: Nvme1n1 00:10:52.568 Test: blockdev write read block ...passed 00:10:52.568 Test: blockdev write zeroes read block ...passed 00:10:52.568 Test: blockdev write zeroes read no split ...passed 00:10:52.568 Test: blockdev write zeroes read split ...passed 00:10:52.568 Test: blockdev write zeroes read split partial ...passed 00:10:52.568 Test: blockdev reset ...[2024-07-15 16:59:42.702364] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:10:52.568 [2024-07-15 16:59:42.702650] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d8b7c0 (9): Bad file descriptor 00:10:52.568 [2024-07-15 16:59:42.720012] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:10:52.568 passed 00:10:52.568 Test: blockdev write read 8 blocks ...passed 00:10:52.568 Test: blockdev write read size > 128k ...passed 00:10:52.568 Test: blockdev write read invalid size ...passed 00:10:52.568 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:52.568 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:52.568 Test: blockdev write read max offset ...passed 00:10:52.568 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:52.568 Test: blockdev writev readv 8 blocks ...passed 00:10:52.568 Test: blockdev writev readv 30 x 1block ...passed 00:10:52.568 Test: blockdev writev readv block ...passed 00:10:52.568 Test: blockdev writev readv size > 128k ...passed 00:10:52.568 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:52.568 Test: blockdev comparev and writev ...[2024-07-15 16:59:42.728614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:52.568 [2024-07-15 16:59:42.728855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:52.568 [2024-07-15 16:59:42.729032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:52.568 [2024-07-15 16:59:42.729188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:52.568 [2024-07-15 16:59:42.729640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:52.568 [2024-07-15 16:59:42.729795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:52.568 [2024-07-15 16:59:42.730030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:52.568 [2024-07-15 16:59:42.730051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:52.568 [2024-07-15 16:59:42.730330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:52.568 [2024-07-15 16:59:42.730365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:52.569 [2024-07-15 16:59:42.730385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:52.569 [2024-07-15 16:59:42.730396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:52.569 [2024-07-15 16:59:42.730688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:52.569 [2024-07-15 16:59:42.730719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:52.569 [2024-07-15 16:59:42.730738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:52.569 [2024-07-15 16:59:42.730748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:52.569 passed 00:10:52.569 Test: blockdev nvme passthru rw ...passed 00:10:52.569 Test: blockdev nvme passthru vendor specific ...[2024-07-15 16:59:42.731786] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:52.569 [2024-07-15 16:59:42.731813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:52.569 [2024-07-15 16:59:42.731924] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:52.569 [2024-07-15 16:59:42.731949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:52.569 [2024-07-15 16:59:42.732064] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:52.569 [2024-07-15 16:59:42.732092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:52.569 [2024-07-15 16:59:42.732202] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:52.569 passed 00:10:52.569 Test: blockdev nvme admin passthru ...[2024-07-15 16:59:42.732226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:52.569 passed 00:10:52.569 Test: blockdev copy ...passed 00:10:52.569 00:10:52.569 Run Summary: Type Total Ran Passed Failed Inactive 00:10:52.569 suites 1 1 n/a 0 0 00:10:52.569 tests 23 23 23 0 0 00:10:52.569 asserts 152 152 152 0 n/a 00:10:52.569 00:10:52.569 Elapsed time = 0.144 seconds 00:10:52.828 16:59:42 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:52.828 16:59:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:52.828 16:59:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:52.828 16:59:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:52.828 16:59:42 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:52.828 16:59:42 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:52.828 16:59:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:52.828 16:59:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:10:52.828 16:59:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:52.828 16:59:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:10:52.828 16:59:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:52.828 16:59:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:52.828 rmmod nvme_tcp 00:10:52.828 rmmod nvme_fabrics 00:10:52.828 rmmod nvme_keyring 00:10:52.828 16:59:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:52.828 16:59:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:10:52.828 16:59:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:10:52.828 16:59:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 69008 ']' 00:10:52.828 16:59:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 69008 00:10:52.828 16:59:43 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 
69008 ']' 00:10:52.828 16:59:43 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 69008 00:10:52.828 16:59:43 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:10:52.828 16:59:43 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:52.828 16:59:43 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 69008 00:10:52.828 16:59:43 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:10:52.828 16:59:43 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:10:52.828 killing process with pid 69008 00:10:52.828 16:59:43 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 69008' 00:10:52.828 16:59:43 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 69008 00:10:52.828 16:59:43 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 69008 00:10:53.087 16:59:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:53.087 16:59:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:53.087 16:59:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:53.087 16:59:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:53.087 16:59:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:53.087 16:59:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:53.087 16:59:43 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:53.087 16:59:43 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:53.087 16:59:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:53.087 00:10:53.087 real 0m2.792s 00:10:53.087 user 0m9.131s 00:10:53.087 sys 0m0.752s 00:10:53.087 16:59:43 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:53.087 16:59:43 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:53.087 ************************************ 00:10:53.087 END TEST nvmf_bdevio 00:10:53.087 ************************************ 00:10:53.351 16:59:43 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:53.351 16:59:43 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:10:53.351 16:59:43 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:53.351 16:59:43 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:53.351 16:59:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:53.351 ************************************ 00:10:53.351 START TEST nvmf_auth_target 00:10:53.351 ************************************ 00:10:53.351 16:59:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:10:53.351 * Looking for test storage... 
00:10:53.351 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:53.351 16:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:53.351 16:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:10:53.351 16:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:53.351 16:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:53.351 16:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:53.351 16:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:53.351 16:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:53.351 16:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:53.351 16:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:53.351 16:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:53.351 16:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:53.351 16:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:53.351 16:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da 00:10:53.351 16:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=0b4e8503-7bac-4879-926a-209303c4b3da 00:10:53.351 16:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:53.351 16:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:53.351 16:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:53.351 16:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:53.351 16:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:53.351 16:59:43 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:53.351 16:59:43 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:53.351 16:59:43 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:53.351 16:59:43 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.351 16:59:43 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.351 16:59:43 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.351 16:59:43 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:10:53.351 16:59:43 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.351 16:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:10:53.351 16:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:53.351 16:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:53.351 16:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:53.351 16:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:53.351 16:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:53.351 16:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:53.351 16:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:53.351 16:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:53.351 16:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:10:53.351 16:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:10:53.351 16:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:10:53.351 16:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da 00:10:53.351 16:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:10:53.351 16:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:10:53.352 16:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:10:53.352 16:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:10:53.352 16:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:53.352 16:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:53.352 16:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:53.352 16:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:53.352 16:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:53.352 16:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:53.352 16:59:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:53.352 16:59:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:53.352 16:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:53.352 16:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:53.352 16:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:53.352 16:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:53.352 16:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:53.352 16:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:53.352 16:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:53.352 16:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:53.352 16:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:53.352 16:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:53.352 16:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:53.352 16:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:53.352 16:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:53.352 16:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:53.352 16:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:53.352 16:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:53.352 16:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:53.352 16:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:53.352 16:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:53.352 16:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:53.352 Cannot find device "nvmf_tgt_br" 00:10:53.352 16:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@155 -- # true 00:10:53.352 16:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:53.352 Cannot find device "nvmf_tgt_br2" 00:10:53.352 16:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@156 -- # true 00:10:53.352 16:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:53.352 16:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:53.352 Cannot find device "nvmf_tgt_br" 00:10:53.352 
16:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@158 -- # true 00:10:53.352 16:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:53.352 Cannot find device "nvmf_tgt_br2" 00:10:53.352 16:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@159 -- # true 00:10:53.352 16:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:53.352 16:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:53.611 16:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:53.611 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:53.611 16:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:10:53.611 16:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:53.611 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:53.611 16:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:10:53.611 16:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:53.611 16:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:53.611 16:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:53.611 16:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:53.611 16:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:53.611 16:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:53.611 16:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:53.611 16:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:53.611 16:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:53.611 16:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:53.611 16:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:53.611 16:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:53.611 16:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:53.611 16:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:53.611 16:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:53.611 16:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:53.611 16:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:53.611 16:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:53.611 16:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:53.611 16:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:53.611 16:59:43 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:53.611 16:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:53.611 16:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:53.611 16:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:53.611 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:53.611 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:10:53.611 00:10:53.611 --- 10.0.0.2 ping statistics --- 00:10:53.611 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:53.611 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:10:53.611 16:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:53.611 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:53.611 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:10:53.611 00:10:53.611 --- 10.0.0.3 ping statistics --- 00:10:53.611 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:53.611 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:10:53.611 16:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:53.611 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:53.611 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:10:53.611 00:10:53.611 --- 10.0.0.1 ping statistics --- 00:10:53.611 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:53.611 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:10:53.611 16:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:53.611 16:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@433 -- # return 0 00:10:53.611 16:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:53.611 16:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:53.611 16:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:53.611 16:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:53.611 16:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:53.611 16:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:53.611 16:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:53.611 16:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:10:53.611 16:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:53.611 16:59:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:53.611 16:59:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:53.611 16:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=69218 00:10:53.611 16:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:10:53.611 16:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 69218 00:10:53.611 16:59:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 69218 ']' 00:10:53.611 16:59:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:53.611 16:59:43 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:53.611 16:59:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:53.611 16:59:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:53.611 16:59:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:54.987 16:59:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:54.987 16:59:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:10:54.987 16:59:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:54.987 16:59:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:54.987 16:59:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:54.987 16:59:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:54.987 16:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=69250 00:10:54.987 16:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:10:54.987 16:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:10:54.987 16:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:10:54.987 16:59:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:54.987 16:59:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:54.987 16:59:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:54.987 16:59:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:10:54.987 16:59:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:10:54.987 16:59:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:10:54.987 16:59:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=e09ff92327f8dd866d11576b5819622ab30e0e91ce9ad895 00:10:54.987 16:59:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:10:54.987 16:59:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.BUO 00:10:54.987 16:59:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key e09ff92327f8dd866d11576b5819622ab30e0e91ce9ad895 0 00:10:54.987 16:59:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 e09ff92327f8dd866d11576b5819622ab30e0e91ce9ad895 0 00:10:54.987 16:59:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:54.987 16:59:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:54.987 16:59:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=e09ff92327f8dd866d11576b5819622ab30e0e91ce9ad895 00:10:54.987 16:59:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:10:54.987 16:59:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:54.987 16:59:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.BUO 00:10:54.987 16:59:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.BUO 00:10:54.987 16:59:44 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.BUO 00:10:54.987 16:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:10:54.987 16:59:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:54.987 16:59:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:54.987 16:59:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:54.987 16:59:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:10:54.987 16:59:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:10:54.987 16:59:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:10:54.987 16:59:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=553863ed5a7e2572b2b69748de009b281d3ecdd0c0c2cbd2a94eca4114200c4b 00:10:54.987 16:59:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:10:54.987 16:59:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.nvz 00:10:54.987 16:59:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 553863ed5a7e2572b2b69748de009b281d3ecdd0c0c2cbd2a94eca4114200c4b 3 00:10:54.987 16:59:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 553863ed5a7e2572b2b69748de009b281d3ecdd0c0c2cbd2a94eca4114200c4b 3 00:10:54.987 16:59:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:54.987 16:59:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:54.987 16:59:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=553863ed5a7e2572b2b69748de009b281d3ecdd0c0c2cbd2a94eca4114200c4b 00:10:54.987 16:59:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:10:54.987 16:59:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:54.987 16:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.nvz 00:10:54.987 16:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.nvz 00:10:54.987 16:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.nvz 00:10:54.987 16:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:10:54.987 16:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:54.987 16:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:54.987 16:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:54.987 16:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:10:54.987 16:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:10:54.987 16:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:10:54.987 16:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=265fbadb03178f24974ab34341141ff7 00:10:54.987 16:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:10:54.987 16:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.5bY 00:10:54.987 16:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 265fbadb03178f24974ab34341141ff7 1 00:10:54.987 16:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 265fbadb03178f24974ab34341141ff7 1 
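Each gen_dhchap_key call above draws len/2 random bytes with xxd and feeds the resulting hex string to an inline python step. The secrets that appear later in this log are the base64 of that ASCII hex string plus four trailing bytes, behind a DHHC-1:<digest-id>: prefix (00 for null, 01/02/03 for sha256/sha384/sha512, the digests map in the trace). That matches the usual DH-HMAC-CHAP secret representation, base64(key + CRC-32(key)); a self-contained sketch under that assumption (the little-endian CRC-32 suffix is assumed, not taken from the trace):

# Sketch of the gen_dhchap_key / format_key steps traced above (not the SPDK source)
key=$(xxd -p -c0 -l 24 /dev/urandom)   # 24 random bytes as a 48-char hex string;
                                       # the ASCII hex string itself is the secret material
digest=0                               # 0=null, 1=sha256, 2=sha384, 3=sha512

python3 - "$key" "$digest" << 'PY'
import base64, sys, zlib

key = sys.argv[1].encode()
digest = int(sys.argv[2])
crc = zlib.crc32(key).to_bytes(4, "little")   # assumed little-endian CRC-32 suffix
print(f"DHHC-1:{digest:02x}:{base64.b64encode(key + crc).decode()}:")
PY

The output has the same shape as the DHHC-1:00:...: and DHHC-1:03:...: secrets passed to nvme connect further down, which is how keys[0] (null) and ckeys[0] (sha512) end up on that command line.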
00:10:54.987 16:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:54.987 16:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:54.987 16:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=265fbadb03178f24974ab34341141ff7 00:10:54.987 16:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:10:54.987 16:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:54.987 16:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.5bY 00:10:54.987 16:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.5bY 00:10:54.987 16:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.5bY 00:10:54.988 16:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:10:54.988 16:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:54.988 16:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:54.988 16:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:54.988 16:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:10:54.988 16:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:10:54.988 16:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:10:54.988 16:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=9f5eb92804ed901e559cf394ec0f0def6d0a8c293d1ebb16 00:10:54.988 16:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:10:54.988 16:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.omZ 00:10:54.988 16:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 9f5eb92804ed901e559cf394ec0f0def6d0a8c293d1ebb16 2 00:10:54.988 16:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 9f5eb92804ed901e559cf394ec0f0def6d0a8c293d1ebb16 2 00:10:54.988 16:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:54.988 16:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:54.988 16:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=9f5eb92804ed901e559cf394ec0f0def6d0a8c293d1ebb16 00:10:54.988 16:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:10:54.988 16:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:54.988 16:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.omZ 00:10:54.988 16:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.omZ 00:10:54.988 16:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.omZ 00:10:54.988 16:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:10:54.988 16:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:54.988 16:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:54.988 16:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:54.988 16:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:10:54.988 16:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:10:54.988 
16:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:10:54.988 16:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=061a523411d8467f71fdb90cfecd2069f79d463345ee7fdb 00:10:54.988 16:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:10:54.988 16:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.sOE 00:10:54.988 16:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 061a523411d8467f71fdb90cfecd2069f79d463345ee7fdb 2 00:10:54.988 16:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 061a523411d8467f71fdb90cfecd2069f79d463345ee7fdb 2 00:10:54.988 16:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:54.988 16:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:54.988 16:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=061a523411d8467f71fdb90cfecd2069f79d463345ee7fdb 00:10:54.988 16:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:10:54.988 16:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:54.988 16:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.sOE 00:10:54.988 16:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.sOE 00:10:54.988 16:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.sOE 00:10:54.988 16:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:10:54.988 16:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:54.988 16:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:54.988 16:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:54.988 16:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:10:54.988 16:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:10:54.988 16:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:10:54.988 16:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=81289c57c57d0a2e8fe4e7c2c919c80b 00:10:54.988 16:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:10:54.988 16:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.jND 00:10:54.988 16:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 81289c57c57d0a2e8fe4e7c2c919c80b 1 00:10:54.988 16:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 81289c57c57d0a2e8fe4e7c2c919c80b 1 00:10:54.988 16:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:54.988 16:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:54.988 16:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=81289c57c57d0a2e8fe4e7c2c919c80b 00:10:54.988 16:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:10:54.988 16:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:54.988 16:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.jND 00:10:54.988 16:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.jND 00:10:54.988 16:59:45 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.jND 00:10:55.247 16:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:10:55.247 16:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:55.247 16:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:55.247 16:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:55.247 16:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:10:55.247 16:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:10:55.247 16:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:10:55.247 16:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=bc745fdfbf11e7a9d0b3237b39245a8accebb0f062ed540ed28bc248afa74c24 00:10:55.247 16:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:10:55.247 16:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.yj4 00:10:55.247 16:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key bc745fdfbf11e7a9d0b3237b39245a8accebb0f062ed540ed28bc248afa74c24 3 00:10:55.247 16:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 bc745fdfbf11e7a9d0b3237b39245a8accebb0f062ed540ed28bc248afa74c24 3 00:10:55.247 16:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:55.247 16:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:55.247 16:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=bc745fdfbf11e7a9d0b3237b39245a8accebb0f062ed540ed28bc248afa74c24 00:10:55.247 16:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:10:55.247 16:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:55.247 16:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.yj4 00:10:55.247 16:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.yj4 00:10:55.247 16:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.yj4 00:10:55.247 16:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:10:55.247 16:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 69218 00:10:55.247 16:59:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 69218 ']' 00:10:55.247 16:59:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:55.247 16:59:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:55.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:55.247 16:59:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
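With all four keys (and three controller keys) written to /tmp, the script waits for both daemons and then registers every key file twice: once with the nvmf target over the default RPC socket (rpc_cmd) and once with the host-side app started with -r /var/tmp/host.sock (hostrpc). The pattern in the following lines, condensed into direct rpc.py calls (sketch; the file names are the temp files generated above):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# target side (default /var/tmp/spdk.sock): keys the subsystem will verify the host against
$rpc keyring_file_add_key key0  /tmp/spdk.key-null.BUO
$rpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.nvz

# host side (-s /var/tmp/host.sock): the same material, referenced later by
# bdev_nvme_attach_controller --dhchap-key / --dhchap-ctrlr-key
$rpc -s /var/tmp/host.sock keyring_file_add_key key0  /tmp/spdk.key-null.BUO
$rpc -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.nvz

# ...and likewise for key1/ckey1, key2/ckey2 and key3 (which has no controller key)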
00:10:55.247 16:59:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:55.247 16:59:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:55.507 16:59:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:55.507 16:59:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:10:55.507 16:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 69250 /var/tmp/host.sock 00:10:55.507 16:59:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 69250 ']' 00:10:55.507 16:59:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:10:55.507 16:59:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:55.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:10:55.507 16:59:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:10:55.507 16:59:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:55.507 16:59:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:55.765 16:59:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:55.765 16:59:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:10:55.765 16:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:10:55.765 16:59:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:55.765 16:59:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:55.765 16:59:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:55.765 16:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:10:55.765 16:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.BUO 00:10:55.765 16:59:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:55.765 16:59:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:55.765 16:59:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:55.765 16:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.BUO 00:10:55.765 16:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.BUO 00:10:56.023 16:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.nvz ]] 00:10:56.023 16:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.nvz 00:10:56.023 16:59:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:56.023 16:59:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:56.023 16:59:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:56.023 16:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.nvz 00:10:56.023 16:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 
/tmp/spdk.key-sha512.nvz 00:10:56.280 16:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:10:56.280 16:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.5bY 00:10:56.280 16:59:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:56.280 16:59:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:56.280 16:59:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:56.280 16:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.5bY 00:10:56.280 16:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.5bY 00:10:56.537 16:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.omZ ]] 00:10:56.537 16:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.omZ 00:10:56.537 16:59:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:56.537 16:59:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:56.537 16:59:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:56.537 16:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.omZ 00:10:56.537 16:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.omZ 00:10:56.805 16:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:10:56.805 16:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.sOE 00:10:56.805 16:59:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:56.805 16:59:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:56.805 16:59:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:56.805 16:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.sOE 00:10:56.805 16:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.sOE 00:10:57.063 16:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.jND ]] 00:10:57.063 16:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.jND 00:10:57.063 16:59:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:57.063 16:59:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:57.063 16:59:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:57.063 16:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.jND 00:10:57.063 16:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.jND 00:10:57.322 16:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:10:57.322 
16:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.yj4 00:10:57.322 16:59:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:57.322 16:59:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:57.322 16:59:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:57.322 16:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.yj4 00:10:57.322 16:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.yj4 00:10:57.580 16:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:10:57.580 16:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:10:57.580 16:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:10:57.580 16:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:57.580 16:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:57.580 16:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:57.838 16:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:10:57.838 16:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:57.838 16:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:57.838 16:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:10:57.838 16:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:57.838 16:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:57.838 16:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:57.838 16:59:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:57.838 16:59:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:57.838 16:59:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:57.838 16:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:57.838 16:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:58.096 00:10:58.096 16:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:58.096 16:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:10:58.096 16:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:58.354 16:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:58.354 16:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:58.354 16:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:58.354 16:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:58.354 16:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:58.354 16:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:58.354 { 00:10:58.354 "cntlid": 1, 00:10:58.354 "qid": 0, 00:10:58.354 "state": "enabled", 00:10:58.354 "thread": "nvmf_tgt_poll_group_000", 00:10:58.354 "listen_address": { 00:10:58.354 "trtype": "TCP", 00:10:58.354 "adrfam": "IPv4", 00:10:58.354 "traddr": "10.0.0.2", 00:10:58.354 "trsvcid": "4420" 00:10:58.354 }, 00:10:58.354 "peer_address": { 00:10:58.354 "trtype": "TCP", 00:10:58.354 "adrfam": "IPv4", 00:10:58.354 "traddr": "10.0.0.1", 00:10:58.354 "trsvcid": "38804" 00:10:58.354 }, 00:10:58.354 "auth": { 00:10:58.354 "state": "completed", 00:10:58.354 "digest": "sha256", 00:10:58.354 "dhgroup": "null" 00:10:58.354 } 00:10:58.354 } 00:10:58.354 ]' 00:10:58.354 16:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:58.354 16:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:58.354 16:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:58.354 16:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:10:58.354 16:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:58.354 16:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:58.354 16:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:58.354 16:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:58.612 16:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --hostid 0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-secret DHHC-1:00:ZTA5ZmY5MjMyN2Y4ZGQ4NjZkMTE1NzZiNTgxOTYyMmFiMzBlMGU5MWNlOWFkODk1nIOcHw==: --dhchap-ctrl-secret DHHC-1:03:NTUzODYzZWQ1YTdlMjU3MmIyYjY5NzQ4ZGUwMDliMjgxZDNlY2RkMGMwYzJjYmQyYTk0ZWNhNDExNDIwMGM0YiILRi8=: 00:11:03.922 16:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:03.922 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:03.922 16:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da 00:11:03.922 16:59:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:03.922 16:59:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:03.922 16:59:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:03.922 16:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:11:03.922 16:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:03.922 16:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:03.922 16:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:11:03.922 16:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:03.922 16:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:03.922 16:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:03.922 16:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:03.922 16:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:03.922 16:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:03.922 16:59:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:03.922 16:59:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:03.922 16:59:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:03.922 16:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:03.922 16:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:03.922 00:11:03.922 16:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:03.922 16:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:03.922 16:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:03.922 16:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:03.922 16:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:03.922 16:59:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:03.922 16:59:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:03.922 16:59:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:03.922 16:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:03.922 { 00:11:03.922 "cntlid": 3, 00:11:03.922 "qid": 0, 00:11:03.922 "state": "enabled", 00:11:03.922 "thread": "nvmf_tgt_poll_group_000", 00:11:03.922 "listen_address": { 00:11:03.922 "trtype": "TCP", 00:11:03.922 "adrfam": "IPv4", 00:11:03.922 "traddr": "10.0.0.2", 00:11:03.922 "trsvcid": "4420" 00:11:03.922 }, 00:11:03.922 "peer_address": { 00:11:03.922 "trtype": "TCP", 00:11:03.922 
"adrfam": "IPv4", 00:11:03.922 "traddr": "10.0.0.1", 00:11:03.922 "trsvcid": "38780" 00:11:03.922 }, 00:11:03.922 "auth": { 00:11:03.922 "state": "completed", 00:11:03.922 "digest": "sha256", 00:11:03.922 "dhgroup": "null" 00:11:03.922 } 00:11:03.922 } 00:11:03.922 ]' 00:11:03.922 16:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:04.180 16:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:04.180 16:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:04.180 16:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:04.180 16:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:04.180 16:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:04.180 16:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:04.180 16:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:04.442 16:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --hostid 0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-secret DHHC-1:01:MjY1ZmJhZGIwMzE3OGYyNDk3NGFiMzQzNDExNDFmZjf1T5Cu: --dhchap-ctrl-secret DHHC-1:02:OWY1ZWI5MjgwNGVkOTAxZTU1OWNmMzk0ZWMwZjBkZWY2ZDBhOGMyOTNkMWViYjE2Ax3/oA==: 00:11:05.021 16:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:05.021 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:05.021 16:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da 00:11:05.021 16:59:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:05.021 16:59:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:05.021 16:59:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:05.021 16:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:05.021 16:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:05.021 16:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:05.585 16:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:11:05.585 16:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:05.585 16:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:05.585 16:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:05.585 16:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:05.585 16:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:05.585 16:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:05.585 16:59:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:05.585 16:59:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:05.585 16:59:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:05.585 16:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:05.585 16:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:05.585 00:11:05.842 16:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:05.842 16:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:05.842 16:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:05.842 16:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:05.842 16:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:05.842 16:59:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:05.842 16:59:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:05.842 16:59:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:05.842 16:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:05.842 { 00:11:05.842 "cntlid": 5, 00:11:05.842 "qid": 0, 00:11:05.842 "state": "enabled", 00:11:05.842 "thread": "nvmf_tgt_poll_group_000", 00:11:05.842 "listen_address": { 00:11:05.842 "trtype": "TCP", 00:11:05.842 "adrfam": "IPv4", 00:11:05.842 "traddr": "10.0.0.2", 00:11:05.842 "trsvcid": "4420" 00:11:05.842 }, 00:11:05.842 "peer_address": { 00:11:05.842 "trtype": "TCP", 00:11:05.842 "adrfam": "IPv4", 00:11:05.842 "traddr": "10.0.0.1", 00:11:05.842 "trsvcid": "38810" 00:11:05.842 }, 00:11:05.842 "auth": { 00:11:05.842 "state": "completed", 00:11:05.842 "digest": "sha256", 00:11:05.842 "dhgroup": "null" 00:11:05.842 } 00:11:05.842 } 00:11:05.842 ]' 00:11:05.842 16:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:06.100 16:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:06.100 16:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:06.100 16:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:06.100 16:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:06.100 16:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:06.100 16:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:06.100 16:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:06.358 16:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --hostid 0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-secret DHHC-1:02:MDYxYTUyMzQxMWQ4NDY3ZjcxZmRiOTBjZmVjZDIwNjlmNzlkNDYzMzQ1ZWU3ZmRiS67TDw==: --dhchap-ctrl-secret DHHC-1:01:ODEyODljNTdjNTdkMGEyZThmZTRlN2MyYzkxOWM4MGI4Gx4s: 00:11:06.924 16:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:06.924 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:06.924 16:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da 00:11:06.924 16:59:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:06.924 16:59:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:06.924 16:59:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:06.924 16:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:06.924 16:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:06.924 16:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:07.181 16:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:11:07.181 16:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:07.181 16:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:07.181 16:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:07.181 16:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:07.181 16:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:07.181 16:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-key key3 00:11:07.181 16:59:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.181 16:59:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:07.181 16:59:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.181 16:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:07.181 16:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:07.446 00:11:07.730 16:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:07.730 16:59:57 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:07.730 16:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:07.730 16:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:07.730 16:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:07.730 16:59:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.730 16:59:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:07.988 16:59:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.988 16:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:07.988 { 00:11:07.988 "cntlid": 7, 00:11:07.988 "qid": 0, 00:11:07.988 "state": "enabled", 00:11:07.988 "thread": "nvmf_tgt_poll_group_000", 00:11:07.988 "listen_address": { 00:11:07.988 "trtype": "TCP", 00:11:07.988 "adrfam": "IPv4", 00:11:07.988 "traddr": "10.0.0.2", 00:11:07.988 "trsvcid": "4420" 00:11:07.988 }, 00:11:07.988 "peer_address": { 00:11:07.988 "trtype": "TCP", 00:11:07.988 "adrfam": "IPv4", 00:11:07.988 "traddr": "10.0.0.1", 00:11:07.988 "trsvcid": "38838" 00:11:07.988 }, 00:11:07.988 "auth": { 00:11:07.988 "state": "completed", 00:11:07.988 "digest": "sha256", 00:11:07.988 "dhgroup": "null" 00:11:07.988 } 00:11:07.988 } 00:11:07.988 ]' 00:11:07.988 16:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:07.988 16:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:07.988 16:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:07.988 16:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:07.988 16:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:07.988 16:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:07.988 16:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:07.989 16:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:08.247 16:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --hostid 0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-secret DHHC-1:03:YmM3NDVmZGZiZjExZTdhOWQwYjMyMzdiMzkyNDVhOGFjY2ViYjBmMDYyZWQ1NDBlZDI4YmMyNDhhZmE3NGMyNC2Swmc=: 00:11:08.815 16:59:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:08.815 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:08.815 16:59:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da 00:11:08.815 16:59:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:08.815 16:59:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:08.815 16:59:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:08.815 16:59:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # 
for dhgroup in "${dhgroups[@]}" 00:11:08.815 16:59:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:08.815 16:59:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:08.815 16:59:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:09.399 16:59:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:11:09.399 16:59:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:09.399 16:59:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:09.399 16:59:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:09.399 16:59:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:09.399 16:59:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:09.399 16:59:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:09.399 16:59:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:09.399 16:59:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:09.399 16:59:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:09.399 16:59:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:09.399 16:59:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:09.399 00:11:09.658 16:59:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:09.658 16:59:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:09.658 16:59:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:09.658 16:59:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:09.658 16:59:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:09.658 16:59:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:09.658 16:59:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:09.658 16:59:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:09.658 16:59:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:09.658 { 00:11:09.658 "cntlid": 9, 00:11:09.658 "qid": 0, 00:11:09.658 "state": "enabled", 00:11:09.658 "thread": "nvmf_tgt_poll_group_000", 00:11:09.658 "listen_address": { 00:11:09.658 "trtype": "TCP", 00:11:09.658 "adrfam": "IPv4", 00:11:09.658 
"traddr": "10.0.0.2", 00:11:09.658 "trsvcid": "4420" 00:11:09.658 }, 00:11:09.658 "peer_address": { 00:11:09.658 "trtype": "TCP", 00:11:09.658 "adrfam": "IPv4", 00:11:09.658 "traddr": "10.0.0.1", 00:11:09.658 "trsvcid": "44316" 00:11:09.658 }, 00:11:09.658 "auth": { 00:11:09.658 "state": "completed", 00:11:09.658 "digest": "sha256", 00:11:09.658 "dhgroup": "ffdhe2048" 00:11:09.658 } 00:11:09.658 } 00:11:09.658 ]' 00:11:09.658 16:59:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:09.917 16:59:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:09.917 17:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:09.917 17:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:09.917 17:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:09.917 17:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:09.917 17:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:09.917 17:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:10.176 17:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --hostid 0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-secret DHHC-1:00:ZTA5ZmY5MjMyN2Y4ZGQ4NjZkMTE1NzZiNTgxOTYyMmFiMzBlMGU5MWNlOWFkODk1nIOcHw==: --dhchap-ctrl-secret DHHC-1:03:NTUzODYzZWQ1YTdlMjU3MmIyYjY5NzQ4ZGUwMDliMjgxZDNlY2RkMGMwYzJjYmQyYTk0ZWNhNDExNDIwMGM0YiILRi8=: 00:11:11.112 17:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:11.112 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:11.112 17:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da 00:11:11.112 17:00:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:11.112 17:00:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:11.112 17:00:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:11.112 17:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:11.112 17:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:11.112 17:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:11.112 17:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:11:11.112 17:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:11.112 17:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:11.112 17:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:11.112 17:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:11.112 17:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:11.112 17:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:11.112 17:00:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:11.112 17:00:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:11.112 17:00:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:11.112 17:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:11.112 17:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:11.674 00:11:11.674 17:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:11.674 17:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:11.674 17:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:11.674 17:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:11.674 17:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:11.674 17:00:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:11.674 17:00:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:11.930 17:00:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:11.930 17:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:11.930 { 00:11:11.930 "cntlid": 11, 00:11:11.930 "qid": 0, 00:11:11.930 "state": "enabled", 00:11:11.930 "thread": "nvmf_tgt_poll_group_000", 00:11:11.930 "listen_address": { 00:11:11.930 "trtype": "TCP", 00:11:11.930 "adrfam": "IPv4", 00:11:11.930 "traddr": "10.0.0.2", 00:11:11.930 "trsvcid": "4420" 00:11:11.930 }, 00:11:11.930 "peer_address": { 00:11:11.930 "trtype": "TCP", 00:11:11.930 "adrfam": "IPv4", 00:11:11.930 "traddr": "10.0.0.1", 00:11:11.930 "trsvcid": "44338" 00:11:11.930 }, 00:11:11.930 "auth": { 00:11:11.930 "state": "completed", 00:11:11.930 "digest": "sha256", 00:11:11.930 "dhgroup": "ffdhe2048" 00:11:11.930 } 00:11:11.930 } 00:11:11.930 ]' 00:11:11.930 17:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:11.930 17:00:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:11.930 17:00:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:11.930 17:00:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:11.930 17:00:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:11.930 17:00:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:11.930 17:00:02 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:11.930 17:00:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:12.186 17:00:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --hostid 0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-secret DHHC-1:01:MjY1ZmJhZGIwMzE3OGYyNDk3NGFiMzQzNDExNDFmZjf1T5Cu: --dhchap-ctrl-secret DHHC-1:02:OWY1ZWI5MjgwNGVkOTAxZTU1OWNmMzk0ZWMwZjBkZWY2ZDBhOGMyOTNkMWViYjE2Ax3/oA==: 00:11:13.130 17:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:13.130 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:13.130 17:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da 00:11:13.130 17:00:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:13.130 17:00:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:13.130 17:00:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:13.130 17:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:13.130 17:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:13.130 17:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:13.130 17:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:11:13.130 17:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:13.130 17:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:13.130 17:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:13.130 17:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:13.130 17:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:13.130 17:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:13.130 17:00:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:13.130 17:00:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:13.130 17:00:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:13.130 17:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:13.130 17:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:13.393 00:11:13.393 17:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:13.393 17:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:13.393 17:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:13.735 17:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:13.735 17:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:13.735 17:00:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:13.735 17:00:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:13.735 17:00:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:13.735 17:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:13.735 { 00:11:13.735 "cntlid": 13, 00:11:13.735 "qid": 0, 00:11:13.735 "state": "enabled", 00:11:13.735 "thread": "nvmf_tgt_poll_group_000", 00:11:13.735 "listen_address": { 00:11:13.735 "trtype": "TCP", 00:11:13.735 "adrfam": "IPv4", 00:11:13.735 "traddr": "10.0.0.2", 00:11:13.735 "trsvcid": "4420" 00:11:13.735 }, 00:11:13.735 "peer_address": { 00:11:13.735 "trtype": "TCP", 00:11:13.735 "adrfam": "IPv4", 00:11:13.735 "traddr": "10.0.0.1", 00:11:13.735 "trsvcid": "44372" 00:11:13.735 }, 00:11:13.735 "auth": { 00:11:13.735 "state": "completed", 00:11:13.735 "digest": "sha256", 00:11:13.735 "dhgroup": "ffdhe2048" 00:11:13.735 } 00:11:13.735 } 00:11:13.735 ]' 00:11:13.735 17:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:14.009 17:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:14.009 17:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:14.009 17:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:14.009 17:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:14.009 17:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:14.009 17:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:14.009 17:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:14.266 17:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --hostid 0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-secret DHHC-1:02:MDYxYTUyMzQxMWQ4NDY3ZjcxZmRiOTBjZmVjZDIwNjlmNzlkNDYzMzQ1ZWU3ZmRiS67TDw==: --dhchap-ctrl-secret DHHC-1:01:ODEyODljNTdjNTdkMGEyZThmZTRlN2MyYzkxOWM4MGI4Gx4s: 00:11:15.199 17:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:15.199 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:15.199 17:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da 
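The cycle logged above is one pass of the connect_authenticate loop in target/auth.sh: the host RPC socket is limited to a single digest/dhgroup pair, the host NQN is registered on the subsystem with the DH-HMAC-CHAP key for that key index, a controller is attached with the same key, the result is verified, and everything is torn down before the next index. A minimal sketch of one such pass, assuming the sockets and NQNs used in this run and that key2/ckey2 are key names registered earlier in the test (not shown in this excerpt):

    # host side: only allow sha256 and the dhgroup under test
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
        bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
    # target side (default RPC socket): register the host with this iteration's key pair
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2
    # host side: attach, which triggers DH-HMAC-CHAP authentication with those keys
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
        bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da \
        -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
    # verify (see the qpairs checks in the log), then clean up for the next key index
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da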
00:11:15.199 17:00:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:15.199 17:00:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:15.199 17:00:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:15.199 17:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:15.199 17:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:15.199 17:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:15.199 17:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:11:15.199 17:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:15.199 17:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:15.199 17:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:15.199 17:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:15.199 17:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:15.199 17:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-key key3 00:11:15.199 17:00:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:15.199 17:00:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:15.199 17:00:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:15.199 17:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:15.199 17:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:15.769 00:11:15.769 17:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:15.769 17:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:15.769 17:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:16.028 17:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:16.028 17:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:16.028 17:00:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:16.028 17:00:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:16.028 17:00:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:16.028 17:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:16.028 { 00:11:16.028 "cntlid": 15, 00:11:16.028 "qid": 0, 
00:11:16.028 "state": "enabled", 00:11:16.028 "thread": "nvmf_tgt_poll_group_000", 00:11:16.028 "listen_address": { 00:11:16.028 "trtype": "TCP", 00:11:16.028 "adrfam": "IPv4", 00:11:16.028 "traddr": "10.0.0.2", 00:11:16.028 "trsvcid": "4420" 00:11:16.028 }, 00:11:16.028 "peer_address": { 00:11:16.028 "trtype": "TCP", 00:11:16.028 "adrfam": "IPv4", 00:11:16.028 "traddr": "10.0.0.1", 00:11:16.028 "trsvcid": "44392" 00:11:16.028 }, 00:11:16.028 "auth": { 00:11:16.028 "state": "completed", 00:11:16.028 "digest": "sha256", 00:11:16.028 "dhgroup": "ffdhe2048" 00:11:16.028 } 00:11:16.028 } 00:11:16.028 ]' 00:11:16.028 17:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:16.028 17:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:16.028 17:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:16.028 17:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:16.028 17:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:16.028 17:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:16.028 17:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:16.028 17:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:16.286 17:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --hostid 0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-secret DHHC-1:03:YmM3NDVmZGZiZjExZTdhOWQwYjMyMzdiMzkyNDVhOGFjY2ViYjBmMDYyZWQ1NDBlZDI4YmMyNDhhZmE3NGMyNC2Swmc=: 00:11:16.852 17:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:17.110 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:17.110 17:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da 00:11:17.110 17:00:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:17.110 17:00:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:17.110 17:00:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:17.110 17:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:17.110 17:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:17.110 17:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:17.110 17:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:17.368 17:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:11:17.368 17:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:17.368 17:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:17.368 17:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
dhgroup=ffdhe3072 00:11:17.369 17:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:17.369 17:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:17.369 17:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:17.369 17:00:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:17.369 17:00:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:17.369 17:00:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:17.369 17:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:17.369 17:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:17.628 00:11:17.628 17:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:17.628 17:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:17.628 17:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:17.885 17:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:17.885 17:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:17.885 17:00:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:17.885 17:00:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:17.885 17:00:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:17.885 17:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:17.885 { 00:11:17.885 "cntlid": 17, 00:11:17.885 "qid": 0, 00:11:17.885 "state": "enabled", 00:11:17.885 "thread": "nvmf_tgt_poll_group_000", 00:11:17.885 "listen_address": { 00:11:17.885 "trtype": "TCP", 00:11:17.885 "adrfam": "IPv4", 00:11:17.885 "traddr": "10.0.0.2", 00:11:17.885 "trsvcid": "4420" 00:11:17.885 }, 00:11:17.885 "peer_address": { 00:11:17.885 "trtype": "TCP", 00:11:17.885 "adrfam": "IPv4", 00:11:17.885 "traddr": "10.0.0.1", 00:11:17.885 "trsvcid": "44416" 00:11:17.885 }, 00:11:17.885 "auth": { 00:11:17.886 "state": "completed", 00:11:17.886 "digest": "sha256", 00:11:17.886 "dhgroup": "ffdhe3072" 00:11:17.886 } 00:11:17.886 } 00:11:17.886 ]' 00:11:17.886 17:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:17.886 17:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:17.886 17:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:17.886 17:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:17.886 17:00:08 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:18.143 17:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:18.143 17:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:18.143 17:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:18.400 17:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --hostid 0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-secret DHHC-1:00:ZTA5ZmY5MjMyN2Y4ZGQ4NjZkMTE1NzZiNTgxOTYyMmFiMzBlMGU5MWNlOWFkODk1nIOcHw==: --dhchap-ctrl-secret DHHC-1:03:NTUzODYzZWQ1YTdlMjU3MmIyYjY5NzQ4ZGUwMDliMjgxZDNlY2RkMGMwYzJjYmQyYTk0ZWNhNDExNDIwMGM0YiILRi8=: 00:11:18.964 17:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:18.964 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:18.964 17:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da 00:11:18.964 17:00:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:18.964 17:00:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:18.964 17:00:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:18.964 17:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:18.964 17:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:18.964 17:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:19.222 17:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:11:19.222 17:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:19.222 17:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:19.222 17:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:19.223 17:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:19.223 17:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:19.223 17:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:19.223 17:00:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:19.223 17:00:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:19.223 17:00:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:19.223 17:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:19.223 
17:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:19.793 00:11:19.793 17:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:19.793 17:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:19.793 17:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:20.052 17:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:20.052 17:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:20.052 17:00:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:20.052 17:00:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:20.052 17:00:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:20.052 17:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:20.052 { 00:11:20.052 "cntlid": 19, 00:11:20.052 "qid": 0, 00:11:20.052 "state": "enabled", 00:11:20.052 "thread": "nvmf_tgt_poll_group_000", 00:11:20.052 "listen_address": { 00:11:20.052 "trtype": "TCP", 00:11:20.052 "adrfam": "IPv4", 00:11:20.052 "traddr": "10.0.0.2", 00:11:20.052 "trsvcid": "4420" 00:11:20.052 }, 00:11:20.052 "peer_address": { 00:11:20.052 "trtype": "TCP", 00:11:20.052 "adrfam": "IPv4", 00:11:20.052 "traddr": "10.0.0.1", 00:11:20.052 "trsvcid": "52660" 00:11:20.052 }, 00:11:20.052 "auth": { 00:11:20.052 "state": "completed", 00:11:20.052 "digest": "sha256", 00:11:20.052 "dhgroup": "ffdhe3072" 00:11:20.052 } 00:11:20.052 } 00:11:20.052 ]' 00:11:20.052 17:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:20.052 17:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:20.052 17:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:20.052 17:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:20.052 17:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:20.052 17:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:20.052 17:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:20.052 17:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:20.316 17:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --hostid 0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-secret DHHC-1:01:MjY1ZmJhZGIwMzE3OGYyNDk3NGFiMzQzNDExNDFmZjf1T5Cu: --dhchap-ctrl-secret DHHC-1:02:OWY1ZWI5MjgwNGVkOTAxZTU1OWNmMzk0ZWMwZjBkZWY2ZDBhOGMyOTNkMWViYjE2Ax3/oA==: 00:11:20.881 17:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:20.881 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
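Alongside the SPDK host RPCs, each iteration is also re-validated from the kernel NVMe host with nvme-cli; there the secrets are passed inline in their DHHC-1 text form rather than referenced by key name. A rough sketch of that step, using the host NQN/UUID from this run but with the secret strings replaced by placeholders (the real values appear verbatim in the log above):

    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da \
        --hostid 0b4e8503-7bac-4879-926a-209303c4b3da \
        --dhchap-secret 'DHHC-1:01:<base64 host secret>:' \
        --dhchap-ctrl-secret 'DHHC-1:02:<base64 controller secret>:'
    # a successful handshake is followed immediately by a disconnect
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0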
00:11:20.881 17:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da 00:11:20.881 17:00:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:20.881 17:00:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:21.139 17:00:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:21.139 17:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:21.139 17:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:21.139 17:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:21.139 17:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:11:21.139 17:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:21.139 17:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:21.139 17:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:21.139 17:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:21.139 17:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:21.139 17:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:21.139 17:00:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:21.139 17:00:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:21.139 17:00:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:21.139 17:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:21.139 17:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:21.705 00:11:21.705 17:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:21.705 17:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:21.705 17:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:21.964 17:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:21.964 17:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:21.964 17:00:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:21.964 17:00:12 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:11:21.964 17:00:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:21.964 17:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:21.964 { 00:11:21.964 "cntlid": 21, 00:11:21.964 "qid": 0, 00:11:21.964 "state": "enabled", 00:11:21.964 "thread": "nvmf_tgt_poll_group_000", 00:11:21.964 "listen_address": { 00:11:21.964 "trtype": "TCP", 00:11:21.964 "adrfam": "IPv4", 00:11:21.964 "traddr": "10.0.0.2", 00:11:21.964 "trsvcid": "4420" 00:11:21.964 }, 00:11:21.964 "peer_address": { 00:11:21.964 "trtype": "TCP", 00:11:21.964 "adrfam": "IPv4", 00:11:21.964 "traddr": "10.0.0.1", 00:11:21.964 "trsvcid": "52704" 00:11:21.964 }, 00:11:21.964 "auth": { 00:11:21.964 "state": "completed", 00:11:21.964 "digest": "sha256", 00:11:21.964 "dhgroup": "ffdhe3072" 00:11:21.964 } 00:11:21.964 } 00:11:21.964 ]' 00:11:21.964 17:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:21.964 17:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:21.964 17:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:21.964 17:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:21.964 17:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:21.964 17:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:21.964 17:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:21.964 17:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:22.222 17:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --hostid 0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-secret DHHC-1:02:MDYxYTUyMzQxMWQ4NDY3ZjcxZmRiOTBjZmVjZDIwNjlmNzlkNDYzMzQ1ZWU3ZmRiS67TDw==: --dhchap-ctrl-secret DHHC-1:01:ODEyODljNTdjNTdkMGEyZThmZTRlN2MyYzkxOWM4MGI4Gx4s: 00:11:23.154 17:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:23.154 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:23.154 17:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da 00:11:23.154 17:00:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:23.154 17:00:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:23.154 17:00:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:23.154 17:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:23.154 17:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:23.154 17:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:23.412 17:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:11:23.412 17:00:13 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:23.412 17:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:23.412 17:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:23.412 17:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:23.412 17:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:23.412 17:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-key key3 00:11:23.412 17:00:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:23.412 17:00:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:23.412 17:00:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:23.412 17:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:23.412 17:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:23.671 00:11:23.671 17:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:23.671 17:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:23.671 17:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:23.929 17:00:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:23.929 17:00:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:23.929 17:00:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:23.929 17:00:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:23.929 17:00:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:23.929 17:00:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:23.929 { 00:11:23.929 "cntlid": 23, 00:11:23.929 "qid": 0, 00:11:23.929 "state": "enabled", 00:11:23.929 "thread": "nvmf_tgt_poll_group_000", 00:11:23.929 "listen_address": { 00:11:23.929 "trtype": "TCP", 00:11:23.929 "adrfam": "IPv4", 00:11:23.929 "traddr": "10.0.0.2", 00:11:23.929 "trsvcid": "4420" 00:11:23.929 }, 00:11:23.929 "peer_address": { 00:11:23.929 "trtype": "TCP", 00:11:23.929 "adrfam": "IPv4", 00:11:23.929 "traddr": "10.0.0.1", 00:11:23.929 "trsvcid": "52724" 00:11:23.929 }, 00:11:23.929 "auth": { 00:11:23.929 "state": "completed", 00:11:23.929 "digest": "sha256", 00:11:23.929 "dhgroup": "ffdhe3072" 00:11:23.929 } 00:11:23.929 } 00:11:23.929 ]' 00:11:23.929 17:00:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:23.929 17:00:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:23.929 17:00:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r 
'.[0].auth.dhgroup' 00:11:23.929 17:00:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:23.929 17:00:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:24.188 17:00:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:24.188 17:00:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:24.188 17:00:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:24.446 17:00:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --hostid 0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-secret DHHC-1:03:YmM3NDVmZGZiZjExZTdhOWQwYjMyMzdiMzkyNDVhOGFjY2ViYjBmMDYyZWQ1NDBlZDI4YmMyNDhhZmE3NGMyNC2Swmc=: 00:11:25.013 17:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:25.013 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:25.013 17:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da 00:11:25.013 17:00:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:25.013 17:00:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:25.013 17:00:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:25.013 17:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:25.013 17:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:25.013 17:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:25.013 17:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:25.272 17:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:11:25.272 17:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:25.272 17:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:25.272 17:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:25.272 17:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:25.272 17:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:25.272 17:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:25.272 17:00:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:25.272 17:00:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:25.272 17:00:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:25.272 17:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:25.272 17:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:25.842 00:11:25.842 17:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:25.842 17:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:25.842 17:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:25.842 17:00:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:25.842 17:00:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:25.842 17:00:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:25.842 17:00:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:25.842 17:00:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:25.842 17:00:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:25.842 { 00:11:25.842 "cntlid": 25, 00:11:25.842 "qid": 0, 00:11:25.842 "state": "enabled", 00:11:25.842 "thread": "nvmf_tgt_poll_group_000", 00:11:25.842 "listen_address": { 00:11:25.842 "trtype": "TCP", 00:11:25.842 "adrfam": "IPv4", 00:11:25.842 "traddr": "10.0.0.2", 00:11:25.842 "trsvcid": "4420" 00:11:25.842 }, 00:11:25.842 "peer_address": { 00:11:25.842 "trtype": "TCP", 00:11:25.842 "adrfam": "IPv4", 00:11:25.842 "traddr": "10.0.0.1", 00:11:25.842 "trsvcid": "52752" 00:11:25.842 }, 00:11:25.842 "auth": { 00:11:25.842 "state": "completed", 00:11:25.842 "digest": "sha256", 00:11:25.842 "dhgroup": "ffdhe4096" 00:11:25.842 } 00:11:25.842 } 00:11:25.842 ]' 00:11:25.842 17:00:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:26.100 17:00:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:26.100 17:00:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:26.100 17:00:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:26.100 17:00:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:26.100 17:00:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:26.100 17:00:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:26.100 17:00:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:26.358 17:00:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --hostid 0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-secret DHHC-1:00:ZTA5ZmY5MjMyN2Y4ZGQ4NjZkMTE1NzZiNTgxOTYyMmFiMzBlMGU5MWNlOWFkODk1nIOcHw==: --dhchap-ctrl-secret 
DHHC-1:03:NTUzODYzZWQ1YTdlMjU3MmIyYjY5NzQ4ZGUwMDliMjgxZDNlY2RkMGMwYzJjYmQyYTk0ZWNhNDExNDIwMGM0YiILRi8=: 00:11:27.293 17:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:27.293 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:27.293 17:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da 00:11:27.293 17:00:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:27.293 17:00:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:27.293 17:00:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:27.293 17:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:27.293 17:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:27.293 17:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:27.293 17:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:11:27.293 17:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:27.293 17:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:27.293 17:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:27.293 17:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:27.293 17:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:27.293 17:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:27.293 17:00:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:27.293 17:00:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:27.293 17:00:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:27.293 17:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:27.293 17:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:27.857 00:11:27.857 17:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:27.857 17:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:27.857 17:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:28.114 17:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 
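For readability: the xtrace above is the same round repeated for every digest/dhgroup/key combination. Below is a minimal standalone sketch of the host-side portion of that round, using only the socket paths, NQNs, addresses and key names visible in the log; the keyring entries key0/ckey0 and the target listening on its default RPC socket are set up earlier in the test and are assumptions here, not shown in this excerpt.

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
HOST_SOCK=/var/tmp/host.sock    # RPC socket of the SPDK host (initiator) application
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da
SUBNQN=nqn.2024-03.io.spdk:cnode0

# 1) Restrict the host application to a single digest/dhgroup pair for this iteration.
"$RPC" -s "$HOST_SOCK" bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096

# 2) Allow the host on the target subsystem with bidirectional DH-HMAC-CHAP keys.
"$RPC" nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0

# 3) Attach a controller from the host application using the matching keys.
"$RPC" -s "$HOST_SOCK" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0

# 4) Confirm the controller exists and that authentication completed with the expected parameters.
[[ "$("$RPC" -s "$HOST_SOCK" bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]
qpairs=$("$RPC" nvmf_subsystem_get_qpairs "$SUBNQN")
[[ "$(jq -r '.[0].auth.digest'  <<< "$qpairs")" == sha256 ]]
[[ "$(jq -r '.[0].auth.dhgroup' <<< "$qpairs")" == ffdhe4096 ]]
[[ "$(jq -r '.[0].auth.state'   <<< "$qpairs")" == completed ]]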
00:11:28.114 17:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:28.114 17:00:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.115 17:00:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:28.115 17:00:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.115 17:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:28.115 { 00:11:28.115 "cntlid": 27, 00:11:28.115 "qid": 0, 00:11:28.115 "state": "enabled", 00:11:28.115 "thread": "nvmf_tgt_poll_group_000", 00:11:28.115 "listen_address": { 00:11:28.115 "trtype": "TCP", 00:11:28.115 "adrfam": "IPv4", 00:11:28.115 "traddr": "10.0.0.2", 00:11:28.115 "trsvcid": "4420" 00:11:28.115 }, 00:11:28.115 "peer_address": { 00:11:28.115 "trtype": "TCP", 00:11:28.115 "adrfam": "IPv4", 00:11:28.115 "traddr": "10.0.0.1", 00:11:28.115 "trsvcid": "52782" 00:11:28.115 }, 00:11:28.115 "auth": { 00:11:28.115 "state": "completed", 00:11:28.115 "digest": "sha256", 00:11:28.115 "dhgroup": "ffdhe4096" 00:11:28.115 } 00:11:28.115 } 00:11:28.115 ]' 00:11:28.115 17:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:28.115 17:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:28.115 17:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:28.115 17:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:28.115 17:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:28.115 17:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:28.115 17:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:28.115 17:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:28.372 17:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --hostid 0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-secret DHHC-1:01:MjY1ZmJhZGIwMzE3OGYyNDk3NGFiMzQzNDExNDFmZjf1T5Cu: --dhchap-ctrl-secret DHHC-1:02:OWY1ZWI5MjgwNGVkOTAxZTU1OWNmMzk0ZWMwZjBkZWY2ZDBhOGMyOTNkMWViYjE2Ax3/oA==: 00:11:28.959 17:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:28.959 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:28.959 17:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da 00:11:28.959 17:00:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.959 17:00:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.216 17:00:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.216 17:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:29.216 17:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:29.216 17:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:29.472 17:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:11:29.472 17:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:29.472 17:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:29.472 17:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:29.472 17:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:29.472 17:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:29.472 17:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:29.472 17:00:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.472 17:00:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.472 17:00:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.472 17:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:29.472 17:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:29.729 00:11:29.730 17:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:29.730 17:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:29.730 17:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:29.986 17:00:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:29.986 17:00:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:29.986 17:00:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.986 17:00:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.986 17:00:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.986 17:00:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:29.986 { 00:11:29.986 "cntlid": 29, 00:11:29.986 "qid": 0, 00:11:29.986 "state": "enabled", 00:11:29.986 "thread": "nvmf_tgt_poll_group_000", 00:11:29.986 "listen_address": { 00:11:29.986 "trtype": "TCP", 00:11:29.986 "adrfam": "IPv4", 00:11:29.986 "traddr": "10.0.0.2", 00:11:29.986 "trsvcid": "4420" 00:11:29.986 }, 00:11:29.986 "peer_address": { 00:11:29.986 "trtype": "TCP", 00:11:29.986 "adrfam": "IPv4", 00:11:29.986 "traddr": "10.0.0.1", 00:11:29.986 "trsvcid": "47326" 00:11:29.986 }, 00:11:29.986 "auth": { 00:11:29.986 "state": "completed", 00:11:29.986 "digest": "sha256", 00:11:29.986 "dhgroup": 
"ffdhe4096" 00:11:29.986 } 00:11:29.986 } 00:11:29.986 ]' 00:11:29.986 17:00:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:30.247 17:00:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:30.247 17:00:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:30.247 17:00:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:30.247 17:00:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:30.247 17:00:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:30.247 17:00:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:30.247 17:00:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:30.503 17:00:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --hostid 0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-secret DHHC-1:02:MDYxYTUyMzQxMWQ4NDY3ZjcxZmRiOTBjZmVjZDIwNjlmNzlkNDYzMzQ1ZWU3ZmRiS67TDw==: --dhchap-ctrl-secret DHHC-1:01:ODEyODljNTdjNTdkMGEyZThmZTRlN2MyYzkxOWM4MGI4Gx4s: 00:11:31.443 17:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:31.443 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:31.443 17:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da 00:11:31.443 17:00:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:31.443 17:00:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:31.443 17:00:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:31.443 17:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:31.443 17:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:31.443 17:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:31.443 17:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:11:31.443 17:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:31.443 17:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:31.443 17:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:31.443 17:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:31.443 17:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:31.443 17:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-key key3 00:11:31.443 17:00:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:31.443 17:00:21 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:11:31.701 17:00:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:31.701 17:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:31.701 17:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:31.958 00:11:31.958 17:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:31.958 17:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:31.958 17:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:32.217 17:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:32.217 17:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:32.217 17:00:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:32.217 17:00:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:32.217 17:00:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:32.217 17:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:32.217 { 00:11:32.217 "cntlid": 31, 00:11:32.217 "qid": 0, 00:11:32.217 "state": "enabled", 00:11:32.217 "thread": "nvmf_tgt_poll_group_000", 00:11:32.217 "listen_address": { 00:11:32.217 "trtype": "TCP", 00:11:32.217 "adrfam": "IPv4", 00:11:32.217 "traddr": "10.0.0.2", 00:11:32.217 "trsvcid": "4420" 00:11:32.217 }, 00:11:32.217 "peer_address": { 00:11:32.217 "trtype": "TCP", 00:11:32.217 "adrfam": "IPv4", 00:11:32.217 "traddr": "10.0.0.1", 00:11:32.217 "trsvcid": "47338" 00:11:32.217 }, 00:11:32.217 "auth": { 00:11:32.217 "state": "completed", 00:11:32.217 "digest": "sha256", 00:11:32.217 "dhgroup": "ffdhe4096" 00:11:32.217 } 00:11:32.217 } 00:11:32.217 ]' 00:11:32.217 17:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:32.217 17:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:32.217 17:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:32.217 17:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:32.217 17:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:32.217 17:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:32.217 17:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:32.217 17:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:32.781 17:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --hostid 
0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-secret DHHC-1:03:YmM3NDVmZGZiZjExZTdhOWQwYjMyMzdiMzkyNDVhOGFjY2ViYjBmMDYyZWQ1NDBlZDI4YmMyNDhhZmE3NGMyNC2Swmc=: 00:11:33.346 17:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:33.346 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:33.346 17:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da 00:11:33.346 17:00:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.346 17:00:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.346 17:00:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.346 17:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:33.346 17:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:33.346 17:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:33.346 17:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:33.604 17:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:11:33.604 17:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:33.604 17:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:33.604 17:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:33.604 17:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:33.604 17:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:33.604 17:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:33.604 17:00:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.604 17:00:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.604 17:00:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.604 17:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:33.604 17:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:34.170 00:11:34.170 17:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:34.170 17:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:34.170 17:00:24 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:34.428 17:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:34.428 17:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:34.428 17:00:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:34.428 17:00:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:34.428 17:00:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:34.428 17:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:34.428 { 00:11:34.428 "cntlid": 33, 00:11:34.428 "qid": 0, 00:11:34.428 "state": "enabled", 00:11:34.428 "thread": "nvmf_tgt_poll_group_000", 00:11:34.428 "listen_address": { 00:11:34.428 "trtype": "TCP", 00:11:34.428 "adrfam": "IPv4", 00:11:34.428 "traddr": "10.0.0.2", 00:11:34.428 "trsvcid": "4420" 00:11:34.428 }, 00:11:34.428 "peer_address": { 00:11:34.428 "trtype": "TCP", 00:11:34.428 "adrfam": "IPv4", 00:11:34.428 "traddr": "10.0.0.1", 00:11:34.428 "trsvcid": "47352" 00:11:34.428 }, 00:11:34.428 "auth": { 00:11:34.428 "state": "completed", 00:11:34.428 "digest": "sha256", 00:11:34.428 "dhgroup": "ffdhe6144" 00:11:34.428 } 00:11:34.428 } 00:11:34.428 ]' 00:11:34.428 17:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:34.428 17:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:34.428 17:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:34.428 17:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:34.428 17:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:34.428 17:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:34.428 17:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:34.428 17:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:34.993 17:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --hostid 0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-secret DHHC-1:00:ZTA5ZmY5MjMyN2Y4ZGQ4NjZkMTE1NzZiNTgxOTYyMmFiMzBlMGU5MWNlOWFkODk1nIOcHw==: --dhchap-ctrl-secret DHHC-1:03:NTUzODYzZWQ1YTdlMjU3MmIyYjY5NzQ4ZGUwMDliMjgxZDNlY2RkMGMwYzJjYmQyYTk0ZWNhNDExNDIwMGM0YiILRi8=: 00:11:35.556 17:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:35.556 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:35.556 17:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da 00:11:35.556 17:00:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:35.556 17:00:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.556 17:00:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:35.556 17:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:35.556 
17:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:35.556 17:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:35.814 17:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:11:35.814 17:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:35.814 17:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:35.814 17:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:35.814 17:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:35.814 17:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:35.814 17:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:35.814 17:00:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:35.814 17:00:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.814 17:00:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:35.814 17:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:35.814 17:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:36.136 00:11:36.136 17:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:36.136 17:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:36.136 17:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:36.409 17:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:36.409 17:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:36.409 17:00:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:36.409 17:00:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:36.409 17:00:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:36.409 17:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:36.409 { 00:11:36.409 "cntlid": 35, 00:11:36.409 "qid": 0, 00:11:36.409 "state": "enabled", 00:11:36.409 "thread": "nvmf_tgt_poll_group_000", 00:11:36.409 "listen_address": { 00:11:36.409 "trtype": "TCP", 00:11:36.409 "adrfam": "IPv4", 00:11:36.409 "traddr": "10.0.0.2", 00:11:36.409 "trsvcid": "4420" 00:11:36.409 }, 00:11:36.409 "peer_address": { 00:11:36.409 "trtype": "TCP", 00:11:36.409 
"adrfam": "IPv4", 00:11:36.409 "traddr": "10.0.0.1", 00:11:36.409 "trsvcid": "47374" 00:11:36.409 }, 00:11:36.409 "auth": { 00:11:36.409 "state": "completed", 00:11:36.409 "digest": "sha256", 00:11:36.409 "dhgroup": "ffdhe6144" 00:11:36.409 } 00:11:36.409 } 00:11:36.409 ]' 00:11:36.409 17:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:36.409 17:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:36.409 17:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:36.409 17:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:36.409 17:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:36.669 17:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:36.669 17:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:36.669 17:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:36.928 17:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --hostid 0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-secret DHHC-1:01:MjY1ZmJhZGIwMzE3OGYyNDk3NGFiMzQzNDExNDFmZjf1T5Cu: --dhchap-ctrl-secret DHHC-1:02:OWY1ZWI5MjgwNGVkOTAxZTU1OWNmMzk0ZWMwZjBkZWY2ZDBhOGMyOTNkMWViYjE2Ax3/oA==: 00:11:37.495 17:00:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:37.495 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:37.495 17:00:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da 00:11:37.495 17:00:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:37.495 17:00:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:37.495 17:00:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:37.495 17:00:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:37.495 17:00:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:37.495 17:00:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:37.753 17:00:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:11:37.753 17:00:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:37.753 17:00:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:37.753 17:00:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:37.753 17:00:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:37.753 17:00:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:37.753 17:00:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:37.753 17:00:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:37.753 17:00:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:37.753 17:00:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:37.753 17:00:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:37.753 17:00:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:38.318 00:11:38.318 17:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:38.318 17:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:38.318 17:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:38.318 17:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:38.319 17:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:38.319 17:00:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:38.319 17:00:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:38.319 17:00:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:38.319 17:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:38.319 { 00:11:38.319 "cntlid": 37, 00:11:38.319 "qid": 0, 00:11:38.319 "state": "enabled", 00:11:38.319 "thread": "nvmf_tgt_poll_group_000", 00:11:38.319 "listen_address": { 00:11:38.319 "trtype": "TCP", 00:11:38.319 "adrfam": "IPv4", 00:11:38.319 "traddr": "10.0.0.2", 00:11:38.319 "trsvcid": "4420" 00:11:38.319 }, 00:11:38.319 "peer_address": { 00:11:38.319 "trtype": "TCP", 00:11:38.319 "adrfam": "IPv4", 00:11:38.319 "traddr": "10.0.0.1", 00:11:38.319 "trsvcid": "47390" 00:11:38.319 }, 00:11:38.319 "auth": { 00:11:38.319 "state": "completed", 00:11:38.319 "digest": "sha256", 00:11:38.319 "dhgroup": "ffdhe6144" 00:11:38.319 } 00:11:38.319 } 00:11:38.319 ]' 00:11:38.319 17:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:38.576 17:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:38.576 17:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:38.576 17:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:38.576 17:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:38.576 17:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:38.576 17:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:38.576 17:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:38.834 17:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --hostid 0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-secret DHHC-1:02:MDYxYTUyMzQxMWQ4NDY3ZjcxZmRiOTBjZmVjZDIwNjlmNzlkNDYzMzQ1ZWU3ZmRiS67TDw==: --dhchap-ctrl-secret DHHC-1:01:ODEyODljNTdjNTdkMGEyZThmZTRlN2MyYzkxOWM4MGI4Gx4s: 00:11:39.399 17:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:39.399 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:39.399 17:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da 00:11:39.399 17:00:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:39.399 17:00:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:39.399 17:00:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:39.399 17:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:39.399 17:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:39.399 17:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:39.657 17:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:11:39.657 17:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:39.657 17:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:39.657 17:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:39.657 17:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:39.657 17:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:39.657 17:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-key key3 00:11:39.657 17:00:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:39.657 17:00:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:39.657 17:00:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:39.657 17:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:39.657 17:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:40.227 00:11:40.227 17:00:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:40.227 
17:00:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:40.227 17:00:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:40.544 17:00:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:40.544 17:00:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:40.544 17:00:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:40.544 17:00:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.544 17:00:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:40.544 17:00:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:40.544 { 00:11:40.544 "cntlid": 39, 00:11:40.544 "qid": 0, 00:11:40.544 "state": "enabled", 00:11:40.544 "thread": "nvmf_tgt_poll_group_000", 00:11:40.544 "listen_address": { 00:11:40.544 "trtype": "TCP", 00:11:40.544 "adrfam": "IPv4", 00:11:40.544 "traddr": "10.0.0.2", 00:11:40.544 "trsvcid": "4420" 00:11:40.544 }, 00:11:40.544 "peer_address": { 00:11:40.544 "trtype": "TCP", 00:11:40.544 "adrfam": "IPv4", 00:11:40.544 "traddr": "10.0.0.1", 00:11:40.544 "trsvcid": "40106" 00:11:40.544 }, 00:11:40.544 "auth": { 00:11:40.544 "state": "completed", 00:11:40.544 "digest": "sha256", 00:11:40.544 "dhgroup": "ffdhe6144" 00:11:40.544 } 00:11:40.544 } 00:11:40.544 ]' 00:11:40.544 17:00:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:40.544 17:00:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:40.544 17:00:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:40.544 17:00:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:40.544 17:00:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:40.544 17:00:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:40.544 17:00:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:40.544 17:00:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:40.802 17:00:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --hostid 0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-secret DHHC-1:03:YmM3NDVmZGZiZjExZTdhOWQwYjMyMzdiMzkyNDVhOGFjY2ViYjBmMDYyZWQ1NDBlZDI4YmMyNDhhZmE3NGMyNC2Swmc=: 00:11:41.368 17:00:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:41.368 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:41.368 17:00:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da 00:11:41.368 17:00:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:41.368 17:00:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.368 17:00:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:41.368 17:00:31 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:41.368 17:00:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:41.368 17:00:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:41.368 17:00:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:41.936 17:00:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:11:41.936 17:00:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:41.936 17:00:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:41.936 17:00:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:41.936 17:00:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:41.936 17:00:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:41.936 17:00:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:41.936 17:00:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:41.936 17:00:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.936 17:00:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:41.936 17:00:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:41.936 17:00:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:42.503 00:11:42.503 17:00:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:42.503 17:00:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:42.503 17:00:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:42.762 17:00:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:42.762 17:00:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:42.762 17:00:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:42.762 17:00:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.762 17:00:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:42.762 17:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:42.762 { 00:11:42.762 "cntlid": 41, 00:11:42.762 "qid": 0, 00:11:42.762 "state": "enabled", 00:11:42.762 "thread": "nvmf_tgt_poll_group_000", 00:11:42.762 "listen_address": { 00:11:42.762 "trtype": 
"TCP", 00:11:42.762 "adrfam": "IPv4", 00:11:42.762 "traddr": "10.0.0.2", 00:11:42.762 "trsvcid": "4420" 00:11:42.762 }, 00:11:42.762 "peer_address": { 00:11:42.762 "trtype": "TCP", 00:11:42.762 "adrfam": "IPv4", 00:11:42.762 "traddr": "10.0.0.1", 00:11:42.762 "trsvcid": "40126" 00:11:42.762 }, 00:11:42.762 "auth": { 00:11:42.762 "state": "completed", 00:11:42.762 "digest": "sha256", 00:11:42.762 "dhgroup": "ffdhe8192" 00:11:42.762 } 00:11:42.762 } 00:11:42.762 ]' 00:11:42.762 17:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:43.019 17:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:43.019 17:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:43.019 17:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:43.019 17:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:43.019 17:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:43.019 17:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:43.019 17:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:43.277 17:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --hostid 0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-secret DHHC-1:00:ZTA5ZmY5MjMyN2Y4ZGQ4NjZkMTE1NzZiNTgxOTYyMmFiMzBlMGU5MWNlOWFkODk1nIOcHw==: --dhchap-ctrl-secret DHHC-1:03:NTUzODYzZWQ1YTdlMjU3MmIyYjY5NzQ4ZGUwMDliMjgxZDNlY2RkMGMwYzJjYmQyYTk0ZWNhNDExNDIwMGM0YiILRi8=: 00:11:44.213 17:00:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:44.213 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:44.213 17:00:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da 00:11:44.213 17:00:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:44.213 17:00:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.213 17:00:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:44.213 17:00:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:44.213 17:00:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:44.213 17:00:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:44.213 17:00:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:11:44.213 17:00:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:44.213 17:00:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:44.213 17:00:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:44.213 17:00:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:44.213 17:00:34 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:44.213 17:00:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:44.213 17:00:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:44.213 17:00:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.213 17:00:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:44.213 17:00:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:44.213 17:00:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:45.166 00:11:45.166 17:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:45.166 17:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:45.166 17:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:45.166 17:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:45.166 17:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:45.166 17:00:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:45.166 17:00:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:45.166 17:00:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:45.166 17:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:45.166 { 00:11:45.166 "cntlid": 43, 00:11:45.166 "qid": 0, 00:11:45.166 "state": "enabled", 00:11:45.166 "thread": "nvmf_tgt_poll_group_000", 00:11:45.166 "listen_address": { 00:11:45.166 "trtype": "TCP", 00:11:45.166 "adrfam": "IPv4", 00:11:45.167 "traddr": "10.0.0.2", 00:11:45.167 "trsvcid": "4420" 00:11:45.167 }, 00:11:45.167 "peer_address": { 00:11:45.167 "trtype": "TCP", 00:11:45.167 "adrfam": "IPv4", 00:11:45.167 "traddr": "10.0.0.1", 00:11:45.167 "trsvcid": "40160" 00:11:45.167 }, 00:11:45.167 "auth": { 00:11:45.167 "state": "completed", 00:11:45.167 "digest": "sha256", 00:11:45.167 "dhgroup": "ffdhe8192" 00:11:45.167 } 00:11:45.167 } 00:11:45.167 ]' 00:11:45.167 17:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:45.167 17:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:45.167 17:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:45.425 17:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:45.425 17:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:45.425 17:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed 
== \c\o\m\p\l\e\t\e\d ]] 00:11:45.425 17:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:45.425 17:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:45.683 17:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --hostid 0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-secret DHHC-1:01:MjY1ZmJhZGIwMzE3OGYyNDk3NGFiMzQzNDExNDFmZjf1T5Cu: --dhchap-ctrl-secret DHHC-1:02:OWY1ZWI5MjgwNGVkOTAxZTU1OWNmMzk0ZWMwZjBkZWY2ZDBhOGMyOTNkMWViYjE2Ax3/oA==: 00:11:46.251 17:00:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:46.251 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:46.251 17:00:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da 00:11:46.251 17:00:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:46.251 17:00:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:46.251 17:00:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:46.251 17:00:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:46.251 17:00:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:46.251 17:00:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:46.816 17:00:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:11:46.816 17:00:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:46.816 17:00:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:46.816 17:00:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:46.816 17:00:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:46.816 17:00:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:46.816 17:00:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:46.816 17:00:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:46.816 17:00:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:46.816 17:00:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:46.816 17:00:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:46.816 17:00:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 
-a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:47.383 00:11:47.384 17:00:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:47.384 17:00:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:47.384 17:00:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:47.642 17:00:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:47.642 17:00:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:47.642 17:00:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:47.642 17:00:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:47.642 17:00:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:47.642 17:00:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:47.643 { 00:11:47.643 "cntlid": 45, 00:11:47.643 "qid": 0, 00:11:47.643 "state": "enabled", 00:11:47.643 "thread": "nvmf_tgt_poll_group_000", 00:11:47.643 "listen_address": { 00:11:47.643 "trtype": "TCP", 00:11:47.643 "adrfam": "IPv4", 00:11:47.643 "traddr": "10.0.0.2", 00:11:47.643 "trsvcid": "4420" 00:11:47.643 }, 00:11:47.643 "peer_address": { 00:11:47.643 "trtype": "TCP", 00:11:47.643 "adrfam": "IPv4", 00:11:47.643 "traddr": "10.0.0.1", 00:11:47.643 "trsvcid": "40192" 00:11:47.643 }, 00:11:47.643 "auth": { 00:11:47.643 "state": "completed", 00:11:47.643 "digest": "sha256", 00:11:47.643 "dhgroup": "ffdhe8192" 00:11:47.643 } 00:11:47.643 } 00:11:47.643 ]' 00:11:47.643 17:00:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:47.643 17:00:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:47.643 17:00:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:47.643 17:00:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:47.643 17:00:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:47.643 17:00:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:47.643 17:00:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:47.643 17:00:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:47.901 17:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --hostid 0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-secret DHHC-1:02:MDYxYTUyMzQxMWQ4NDY3ZjcxZmRiOTBjZmVjZDIwNjlmNzlkNDYzMzQ1ZWU3ZmRiS67TDw==: --dhchap-ctrl-secret DHHC-1:01:ODEyODljNTdjNTdkMGEyZThmZTRlN2MyYzkxOWM4MGI4Gx4s: 00:11:48.476 17:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:48.476 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:48.476 17:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da 00:11:48.476 17:00:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:48.476 17:00:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:48.476 17:00:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:48.476 17:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:48.476 17:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:48.476 17:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:48.734 17:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:11:48.734 17:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:48.734 17:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:48.734 17:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:48.734 17:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:48.734 17:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:48.734 17:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-key key3 00:11:48.734 17:00:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:48.734 17:00:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:48.734 17:00:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:48.734 17:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:48.734 17:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:49.301 00:11:49.301 17:00:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:49.301 17:00:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:49.301 17:00:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:49.866 17:00:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:49.866 17:00:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:49.867 17:00:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:49.867 17:00:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:49.867 17:00:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:49.867 17:00:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 
00:11:49.867 { 00:11:49.867 "cntlid": 47, 00:11:49.867 "qid": 0, 00:11:49.867 "state": "enabled", 00:11:49.867 "thread": "nvmf_tgt_poll_group_000", 00:11:49.867 "listen_address": { 00:11:49.867 "trtype": "TCP", 00:11:49.867 "adrfam": "IPv4", 00:11:49.867 "traddr": "10.0.0.2", 00:11:49.867 "trsvcid": "4420" 00:11:49.867 }, 00:11:49.867 "peer_address": { 00:11:49.867 "trtype": "TCP", 00:11:49.867 "adrfam": "IPv4", 00:11:49.867 "traddr": "10.0.0.1", 00:11:49.867 "trsvcid": "40208" 00:11:49.867 }, 00:11:49.867 "auth": { 00:11:49.867 "state": "completed", 00:11:49.867 "digest": "sha256", 00:11:49.867 "dhgroup": "ffdhe8192" 00:11:49.867 } 00:11:49.867 } 00:11:49.867 ]' 00:11:49.867 17:00:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:49.867 17:00:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:49.867 17:00:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:49.867 17:00:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:49.867 17:00:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:49.867 17:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:49.867 17:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:49.867 17:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:50.125 17:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --hostid 0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-secret DHHC-1:03:YmM3NDVmZGZiZjExZTdhOWQwYjMyMzdiMzkyNDVhOGFjY2ViYjBmMDYyZWQ1NDBlZDI4YmMyNDhhZmE3NGMyNC2Swmc=: 00:11:50.691 17:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:50.691 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:50.691 17:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da 00:11:50.691 17:00:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:50.692 17:00:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.692 17:00:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:50.692 17:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:11:50.692 17:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:50.692 17:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:50.692 17:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:50.692 17:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:50.963 17:00:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:11:50.963 17:00:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 
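Note: the rounds traced above all follow the same connect_authenticate sequence — pin the host-side bdev layer to one digest/DH-group pair, add the host NQN to the subsystem with the key under test, attach a controller over the host RPC socket and check the qpair's auth state on the target, then repeat the handshake from the kernel initiator with the matching DHHC-1 secrets before tearing everything down. A minimal bash sketch of one such round, using the rpc.py path and addresses seen in this log (it is not the script's verbatim code; DHCHAP_SECRET/DHCHAP_CTRL_SECRET and the key IDs are placeholders for the current iteration):

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # target-side rpc.py (rpc_cmd in this log wraps the same script; socket selection omitted in this sketch)
HOSTRPC="$RPC -s /var/tmp/host.sock"              # host-side SPDK app, as used by hostrpc in this log
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da

# Host side: restrict DH-HMAC-CHAP negotiation to one digest/dhgroup combination.
$HOSTRPC bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192

# Target side: allow this host on the subsystem with the key (and controller key) under test.
$RPC nvmf_subsystem_add_host $SUBNQN $HOSTNQN --dhchap-key key2 --dhchap-ctrlr-key ckey2

# Attach through the host app, confirm the qpair authenticated, then detach.
$HOSTRPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q $HOSTNQN -n $SUBNQN --dhchap-key key2 --dhchap-ctrlr-key ckey2
$RPC nvmf_subsystem_get_qpairs $SUBNQN | jq -r '.[0].auth.state'    # expected: completed
$HOSTRPC bdev_nvme_detach_controller nvme0

# Same handshake from the kernel initiator (secrets elided), then clean up.
nvme connect -t tcp -a 10.0.0.2 -n $SUBNQN -i 1 -q $HOSTNQN \
    --hostid 0b4e8503-7bac-4879-926a-209303c4b3da \
    --dhchap-secret "$DHCHAP_SECRET" --dhchap-ctrl-secret "$DHCHAP_CTRL_SECRET"
nvme disconnect -n $SUBNQN
$RPC nvmf_subsystem_remove_host $SUBNQN $HOSTNQN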
00:11:50.963 17:00:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:50.963 17:00:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:50.963 17:00:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:50.963 17:00:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:50.963 17:00:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:50.963 17:00:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:50.963 17:00:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.963 17:00:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:50.963 17:00:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:50.963 17:00:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:51.224 00:11:51.483 17:00:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:51.483 17:00:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:51.483 17:00:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:51.741 17:00:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:51.741 17:00:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:51.741 17:00:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:51.741 17:00:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:51.741 17:00:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:51.741 17:00:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:51.741 { 00:11:51.741 "cntlid": 49, 00:11:51.741 "qid": 0, 00:11:51.741 "state": "enabled", 00:11:51.741 "thread": "nvmf_tgt_poll_group_000", 00:11:51.741 "listen_address": { 00:11:51.741 "trtype": "TCP", 00:11:51.741 "adrfam": "IPv4", 00:11:51.741 "traddr": "10.0.0.2", 00:11:51.741 "trsvcid": "4420" 00:11:51.741 }, 00:11:51.741 "peer_address": { 00:11:51.741 "trtype": "TCP", 00:11:51.741 "adrfam": "IPv4", 00:11:51.741 "traddr": "10.0.0.1", 00:11:51.741 "trsvcid": "39376" 00:11:51.741 }, 00:11:51.741 "auth": { 00:11:51.741 "state": "completed", 00:11:51.741 "digest": "sha384", 00:11:51.741 "dhgroup": "null" 00:11:51.741 } 00:11:51.741 } 00:11:51.741 ]' 00:11:51.741 17:00:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:51.741 17:00:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:51.741 17:00:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:51.741 17:00:41 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:51.741 17:00:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:51.741 17:00:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:51.741 17:00:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:51.741 17:00:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:52.312 17:00:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --hostid 0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-secret DHHC-1:00:ZTA5ZmY5MjMyN2Y4ZGQ4NjZkMTE1NzZiNTgxOTYyMmFiMzBlMGU5MWNlOWFkODk1nIOcHw==: --dhchap-ctrl-secret DHHC-1:03:NTUzODYzZWQ1YTdlMjU3MmIyYjY5NzQ4ZGUwMDliMjgxZDNlY2RkMGMwYzJjYmQyYTk0ZWNhNDExNDIwMGM0YiILRi8=: 00:11:52.883 17:00:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:52.883 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:52.883 17:00:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da 00:11:52.883 17:00:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:52.883 17:00:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.883 17:00:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:52.883 17:00:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:52.883 17:00:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:52.883 17:00:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:53.141 17:00:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:11:53.141 17:00:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:53.141 17:00:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:53.141 17:00:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:53.141 17:00:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:53.141 17:00:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:53.141 17:00:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:53.141 17:00:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:53.141 17:00:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:53.141 17:00:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:53.141 17:00:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:53.141 17:00:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:53.411 00:11:53.411 17:00:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:53.411 17:00:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:53.411 17:00:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:53.670 17:00:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:53.670 17:00:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:53.670 17:00:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:53.670 17:00:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:53.670 17:00:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:53.670 17:00:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:53.670 { 00:11:53.670 "cntlid": 51, 00:11:53.670 "qid": 0, 00:11:53.670 "state": "enabled", 00:11:53.670 "thread": "nvmf_tgt_poll_group_000", 00:11:53.670 "listen_address": { 00:11:53.670 "trtype": "TCP", 00:11:53.670 "adrfam": "IPv4", 00:11:53.670 "traddr": "10.0.0.2", 00:11:53.670 "trsvcid": "4420" 00:11:53.670 }, 00:11:53.670 "peer_address": { 00:11:53.670 "trtype": "TCP", 00:11:53.670 "adrfam": "IPv4", 00:11:53.670 "traddr": "10.0.0.1", 00:11:53.670 "trsvcid": "39402" 00:11:53.670 }, 00:11:53.670 "auth": { 00:11:53.670 "state": "completed", 00:11:53.670 "digest": "sha384", 00:11:53.670 "dhgroup": "null" 00:11:53.670 } 00:11:53.670 } 00:11:53.670 ]' 00:11:53.670 17:00:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:53.935 17:00:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:53.935 17:00:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:53.935 17:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:53.935 17:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:53.935 17:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:53.935 17:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:53.935 17:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:54.195 17:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --hostid 0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-secret DHHC-1:01:MjY1ZmJhZGIwMzE3OGYyNDk3NGFiMzQzNDExNDFmZjf1T5Cu: --dhchap-ctrl-secret DHHC-1:02:OWY1ZWI5MjgwNGVkOTAxZTU1OWNmMzk0ZWMwZjBkZWY2ZDBhOGMyOTNkMWViYjE2Ax3/oA==: 00:11:55.131 17:00:45 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:55.131 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:55.131 17:00:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da 00:11:55.131 17:00:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:55.131 17:00:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:55.131 17:00:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:55.131 17:00:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:55.131 17:00:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:55.131 17:00:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:55.398 17:00:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:11:55.398 17:00:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:55.398 17:00:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:55.398 17:00:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:55.398 17:00:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:55.398 17:00:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:55.398 17:00:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:55.398 17:00:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:55.398 17:00:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:55.398 17:00:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:55.398 17:00:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:55.398 17:00:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:55.656 00:11:55.656 17:00:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:55.656 17:00:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:55.656 17:00:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:55.915 17:00:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:55.915 17:00:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:55.915 17:00:46 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:11:55.915 17:00:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:55.915 17:00:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:55.915 17:00:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:55.915 { 00:11:55.915 "cntlid": 53, 00:11:55.915 "qid": 0, 00:11:55.915 "state": "enabled", 00:11:55.915 "thread": "nvmf_tgt_poll_group_000", 00:11:55.915 "listen_address": { 00:11:55.915 "trtype": "TCP", 00:11:55.915 "adrfam": "IPv4", 00:11:55.915 "traddr": "10.0.0.2", 00:11:55.915 "trsvcid": "4420" 00:11:55.915 }, 00:11:55.915 "peer_address": { 00:11:55.915 "trtype": "TCP", 00:11:55.915 "adrfam": "IPv4", 00:11:55.915 "traddr": "10.0.0.1", 00:11:55.915 "trsvcid": "39434" 00:11:55.915 }, 00:11:55.915 "auth": { 00:11:55.915 "state": "completed", 00:11:55.915 "digest": "sha384", 00:11:55.915 "dhgroup": "null" 00:11:55.915 } 00:11:55.915 } 00:11:55.915 ]' 00:11:55.915 17:00:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:55.915 17:00:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:55.915 17:00:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:55.915 17:00:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:55.915 17:00:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:55.915 17:00:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:55.915 17:00:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:55.915 17:00:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:56.175 17:00:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --hostid 0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-secret DHHC-1:02:MDYxYTUyMzQxMWQ4NDY3ZjcxZmRiOTBjZmVjZDIwNjlmNzlkNDYzMzQ1ZWU3ZmRiS67TDw==: --dhchap-ctrl-secret DHHC-1:01:ODEyODljNTdjNTdkMGEyZThmZTRlN2MyYzkxOWM4MGI4Gx4s: 00:11:57.115 17:00:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:57.115 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:57.115 17:00:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da 00:11:57.115 17:00:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:57.115 17:00:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.115 17:00:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:57.115 17:00:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:57.115 17:00:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:57.115 17:00:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:57.115 17:00:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 
-- # connect_authenticate sha384 null 3 00:11:57.115 17:00:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:57.115 17:00:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:57.115 17:00:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:57.115 17:00:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:57.115 17:00:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:57.115 17:00:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-key key3 00:11:57.115 17:00:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:57.115 17:00:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.115 17:00:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:57.115 17:00:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:57.115 17:00:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:57.376 00:11:57.376 17:00:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:57.376 17:00:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:57.376 17:00:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:57.942 17:00:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:57.942 17:00:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:57.942 17:00:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:57.942 17:00:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.942 17:00:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:57.942 17:00:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:57.942 { 00:11:57.942 "cntlid": 55, 00:11:57.942 "qid": 0, 00:11:57.942 "state": "enabled", 00:11:57.942 "thread": "nvmf_tgt_poll_group_000", 00:11:57.942 "listen_address": { 00:11:57.942 "trtype": "TCP", 00:11:57.942 "adrfam": "IPv4", 00:11:57.942 "traddr": "10.0.0.2", 00:11:57.942 "trsvcid": "4420" 00:11:57.942 }, 00:11:57.942 "peer_address": { 00:11:57.942 "trtype": "TCP", 00:11:57.942 "adrfam": "IPv4", 00:11:57.943 "traddr": "10.0.0.1", 00:11:57.943 "trsvcid": "39466" 00:11:57.943 }, 00:11:57.943 "auth": { 00:11:57.943 "state": "completed", 00:11:57.943 "digest": "sha384", 00:11:57.943 "dhgroup": "null" 00:11:57.943 } 00:11:57.943 } 00:11:57.943 ]' 00:11:57.943 17:00:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:57.943 17:00:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:57.943 17:00:48 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:57.943 17:00:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:57.943 17:00:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:57.943 17:00:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:57.943 17:00:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:57.943 17:00:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:58.201 17:00:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --hostid 0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-secret DHHC-1:03:YmM3NDVmZGZiZjExZTdhOWQwYjMyMzdiMzkyNDVhOGFjY2ViYjBmMDYyZWQ1NDBlZDI4YmMyNDhhZmE3NGMyNC2Swmc=: 00:11:58.770 17:00:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:58.770 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:58.770 17:00:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da 00:11:58.770 17:00:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:58.770 17:00:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.770 17:00:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:58.770 17:00:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:58.770 17:00:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:58.770 17:00:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:58.770 17:00:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:59.336 17:00:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:11:59.337 17:00:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:59.337 17:00:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:59.337 17:00:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:59.337 17:00:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:59.337 17:00:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:59.337 17:00:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:59.337 17:00:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:59.337 17:00:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.337 17:00:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:59.337 17:00:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:59.337 17:00:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:59.594 00:11:59.594 17:00:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:59.594 17:00:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:59.594 17:00:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:59.852 17:00:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:59.852 17:00:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:59.852 17:00:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:59.852 17:00:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.852 17:00:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:59.852 17:00:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:59.852 { 00:11:59.852 "cntlid": 57, 00:11:59.852 "qid": 0, 00:11:59.852 "state": "enabled", 00:11:59.852 "thread": "nvmf_tgt_poll_group_000", 00:11:59.852 "listen_address": { 00:11:59.852 "trtype": "TCP", 00:11:59.852 "adrfam": "IPv4", 00:11:59.852 "traddr": "10.0.0.2", 00:11:59.852 "trsvcid": "4420" 00:11:59.852 }, 00:11:59.852 "peer_address": { 00:11:59.852 "trtype": "TCP", 00:11:59.852 "adrfam": "IPv4", 00:11:59.852 "traddr": "10.0.0.1", 00:11:59.852 "trsvcid": "49636" 00:11:59.852 }, 00:11:59.852 "auth": { 00:11:59.852 "state": "completed", 00:11:59.852 "digest": "sha384", 00:11:59.852 "dhgroup": "ffdhe2048" 00:11:59.852 } 00:11:59.852 } 00:11:59.852 ]' 00:11:59.852 17:00:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:59.852 17:00:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:59.852 17:00:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:59.852 17:00:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:59.852 17:00:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:59.852 17:00:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:59.852 17:00:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:59.852 17:00:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:00.418 17:00:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --hostid 0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-secret DHHC-1:00:ZTA5ZmY5MjMyN2Y4ZGQ4NjZkMTE1NzZiNTgxOTYyMmFiMzBlMGU5MWNlOWFkODk1nIOcHw==: --dhchap-ctrl-secret 
DHHC-1:03:NTUzODYzZWQ1YTdlMjU3MmIyYjY5NzQ4ZGUwMDliMjgxZDNlY2RkMGMwYzJjYmQyYTk0ZWNhNDExNDIwMGM0YiILRi8=: 00:12:00.983 17:00:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:00.983 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:00.983 17:00:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da 00:12:00.983 17:00:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:00.983 17:00:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.983 17:00:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:00.983 17:00:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:00.983 17:00:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:00.983 17:00:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:01.241 17:00:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:12:01.241 17:00:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:01.241 17:00:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:01.241 17:00:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:01.241 17:00:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:01.241 17:00:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:01.241 17:00:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:01.241 17:00:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.241 17:00:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.241 17:00:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.241 17:00:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:01.241 17:00:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:01.806 00:12:01.806 17:00:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:01.806 17:00:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:01.806 17:00:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:01.806 17:00:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 
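Note: after each attach the script fetches the subsystem's qpairs from the target, as it does immediately below, and asserts that the negotiated digest, DH group, and authentication state match the current iteration. A compact sketch of that verification step (not the script's verbatim code; the EXPECT_* values are placeholders for the current loop iteration):

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SUBNQN=nqn.2024-03.io.spdk:cnode0
EXPECT_DIGEST=sha384        # placeholder: digest of the current iteration
EXPECT_DHGROUP=ffdhe2048    # placeholder: dhgroup of the current iteration

# Pull the qpair list once, then check the negotiated auth parameters with jq,
# mirroring the .[0].auth.digest/.dhgroup/.state checks traced in this log.
qpairs=$($RPC nvmf_subsystem_get_qpairs "$SUBNQN")
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$EXPECT_DIGEST"  ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$EXPECT_DHGROUP" ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == "completed"       ]]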
00:12:01.806 17:00:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:01.806 17:00:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.806 17:00:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.064 17:00:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:02.064 17:00:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:02.064 { 00:12:02.064 "cntlid": 59, 00:12:02.064 "qid": 0, 00:12:02.064 "state": "enabled", 00:12:02.064 "thread": "nvmf_tgt_poll_group_000", 00:12:02.064 "listen_address": { 00:12:02.064 "trtype": "TCP", 00:12:02.064 "adrfam": "IPv4", 00:12:02.064 "traddr": "10.0.0.2", 00:12:02.064 "trsvcid": "4420" 00:12:02.064 }, 00:12:02.064 "peer_address": { 00:12:02.064 "trtype": "TCP", 00:12:02.064 "adrfam": "IPv4", 00:12:02.064 "traddr": "10.0.0.1", 00:12:02.064 "trsvcid": "49660" 00:12:02.064 }, 00:12:02.064 "auth": { 00:12:02.064 "state": "completed", 00:12:02.064 "digest": "sha384", 00:12:02.064 "dhgroup": "ffdhe2048" 00:12:02.064 } 00:12:02.064 } 00:12:02.064 ]' 00:12:02.064 17:00:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:02.064 17:00:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:02.064 17:00:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:02.064 17:00:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:02.064 17:00:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:02.064 17:00:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:02.064 17:00:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:02.064 17:00:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:02.323 17:00:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --hostid 0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-secret DHHC-1:01:MjY1ZmJhZGIwMzE3OGYyNDk3NGFiMzQzNDExNDFmZjf1T5Cu: --dhchap-ctrl-secret DHHC-1:02:OWY1ZWI5MjgwNGVkOTAxZTU1OWNmMzk0ZWMwZjBkZWY2ZDBhOGMyOTNkMWViYjE2Ax3/oA==: 00:12:02.892 17:00:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:02.892 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:02.892 17:00:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da 00:12:02.892 17:00:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:02.892 17:00:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.892 17:00:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:02.892 17:00:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:02.892 17:00:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:02.892 17:00:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:03.164 17:00:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:12:03.164 17:00:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:03.164 17:00:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:03.164 17:00:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:03.164 17:00:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:03.164 17:00:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:03.164 17:00:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:03.164 17:00:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:03.164 17:00:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.164 17:00:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:03.164 17:00:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:03.164 17:00:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:03.734 00:12:03.734 17:00:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:03.734 17:00:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:03.734 17:00:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:03.991 17:00:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:03.991 17:00:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:03.991 17:00:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:03.991 17:00:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.991 17:00:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:03.991 17:00:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:03.991 { 00:12:03.991 "cntlid": 61, 00:12:03.991 "qid": 0, 00:12:03.991 "state": "enabled", 00:12:03.991 "thread": "nvmf_tgt_poll_group_000", 00:12:03.991 "listen_address": { 00:12:03.991 "trtype": "TCP", 00:12:03.991 "adrfam": "IPv4", 00:12:03.991 "traddr": "10.0.0.2", 00:12:03.991 "trsvcid": "4420" 00:12:03.991 }, 00:12:03.991 "peer_address": { 00:12:03.991 "trtype": "TCP", 00:12:03.991 "adrfam": "IPv4", 00:12:03.991 "traddr": "10.0.0.1", 00:12:03.991 "trsvcid": "49674" 00:12:03.992 }, 00:12:03.992 "auth": { 00:12:03.992 "state": "completed", 00:12:03.992 "digest": "sha384", 00:12:03.992 "dhgroup": 
"ffdhe2048" 00:12:03.992 } 00:12:03.992 } 00:12:03.992 ]' 00:12:03.992 17:00:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:03.992 17:00:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:03.992 17:00:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:03.992 17:00:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:03.992 17:00:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:03.992 17:00:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:03.992 17:00:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:03.992 17:00:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:04.249 17:00:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --hostid 0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-secret DHHC-1:02:MDYxYTUyMzQxMWQ4NDY3ZjcxZmRiOTBjZmVjZDIwNjlmNzlkNDYzMzQ1ZWU3ZmRiS67TDw==: --dhchap-ctrl-secret DHHC-1:01:ODEyODljNTdjNTdkMGEyZThmZTRlN2MyYzkxOWM4MGI4Gx4s: 00:12:05.181 17:00:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:05.181 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:05.181 17:00:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da 00:12:05.181 17:00:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.181 17:00:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:05.181 17:00:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.181 17:00:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:05.181 17:00:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:05.181 17:00:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:05.181 17:00:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:12:05.181 17:00:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:05.181 17:00:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:05.181 17:00:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:05.181 17:00:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:05.181 17:00:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:05.181 17:00:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-key key3 00:12:05.181 17:00:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.181 17:00:55 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:12:05.181 17:00:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.181 17:00:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:05.181 17:00:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:05.454 00:12:05.454 17:00:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:05.454 17:00:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:05.454 17:00:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:05.714 17:00:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:05.714 17:00:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:05.714 17:00:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.714 17:00:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:05.714 17:00:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.714 17:00:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:05.714 { 00:12:05.714 "cntlid": 63, 00:12:05.714 "qid": 0, 00:12:05.714 "state": "enabled", 00:12:05.714 "thread": "nvmf_tgt_poll_group_000", 00:12:05.714 "listen_address": { 00:12:05.714 "trtype": "TCP", 00:12:05.714 "adrfam": "IPv4", 00:12:05.714 "traddr": "10.0.0.2", 00:12:05.714 "trsvcid": "4420" 00:12:05.714 }, 00:12:05.714 "peer_address": { 00:12:05.714 "trtype": "TCP", 00:12:05.714 "adrfam": "IPv4", 00:12:05.714 "traddr": "10.0.0.1", 00:12:05.714 "trsvcid": "49688" 00:12:05.714 }, 00:12:05.714 "auth": { 00:12:05.714 "state": "completed", 00:12:05.714 "digest": "sha384", 00:12:05.714 "dhgroup": "ffdhe2048" 00:12:05.714 } 00:12:05.714 } 00:12:05.714 ]' 00:12:05.714 17:00:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:05.971 17:00:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:05.971 17:00:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:05.971 17:00:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:05.971 17:00:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:05.971 17:00:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:05.971 17:00:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:05.971 17:00:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:06.229 17:00:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --hostid 
0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-secret DHHC-1:03:YmM3NDVmZGZiZjExZTdhOWQwYjMyMzdiMzkyNDVhOGFjY2ViYjBmMDYyZWQ1NDBlZDI4YmMyNDhhZmE3NGMyNC2Swmc=: 00:12:06.796 17:00:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:06.796 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:06.796 17:00:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da 00:12:06.796 17:00:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:06.796 17:00:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.796 17:00:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:06.796 17:00:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:06.796 17:00:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:06.796 17:00:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:06.796 17:00:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:07.363 17:00:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:12:07.363 17:00:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:07.363 17:00:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:07.363 17:00:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:07.363 17:00:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:07.363 17:00:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:07.363 17:00:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:07.363 17:00:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.363 17:00:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:07.363 17:00:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.363 17:00:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:07.363 17:00:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:07.629 00:12:07.629 17:00:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:07.629 17:00:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:07.629 17:00:57 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:07.893 17:00:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:07.893 17:00:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:07.893 17:00:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.893 17:00:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:07.893 17:00:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.893 17:00:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:07.893 { 00:12:07.893 "cntlid": 65, 00:12:07.893 "qid": 0, 00:12:07.893 "state": "enabled", 00:12:07.893 "thread": "nvmf_tgt_poll_group_000", 00:12:07.894 "listen_address": { 00:12:07.894 "trtype": "TCP", 00:12:07.894 "adrfam": "IPv4", 00:12:07.894 "traddr": "10.0.0.2", 00:12:07.894 "trsvcid": "4420" 00:12:07.894 }, 00:12:07.894 "peer_address": { 00:12:07.894 "trtype": "TCP", 00:12:07.894 "adrfam": "IPv4", 00:12:07.894 "traddr": "10.0.0.1", 00:12:07.894 "trsvcid": "49716" 00:12:07.894 }, 00:12:07.894 "auth": { 00:12:07.894 "state": "completed", 00:12:07.894 "digest": "sha384", 00:12:07.894 "dhgroup": "ffdhe3072" 00:12:07.894 } 00:12:07.894 } 00:12:07.894 ]' 00:12:07.894 17:00:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:07.894 17:00:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:07.894 17:00:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:07.894 17:00:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:07.894 17:00:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:07.894 17:00:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:07.894 17:00:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:07.894 17:00:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:08.458 17:00:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --hostid 0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-secret DHHC-1:00:ZTA5ZmY5MjMyN2Y4ZGQ4NjZkMTE1NzZiNTgxOTYyMmFiMzBlMGU5MWNlOWFkODk1nIOcHw==: --dhchap-ctrl-secret DHHC-1:03:NTUzODYzZWQ1YTdlMjU3MmIyYjY5NzQ4ZGUwMDliMjgxZDNlY2RkMGMwYzJjYmQyYTk0ZWNhNDExNDIwMGM0YiILRi8=: 00:12:09.029 17:00:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:09.029 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:09.029 17:00:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da 00:12:09.029 17:00:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.029 17:00:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.029 17:00:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.029 17:00:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:09.029 
17:00:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:09.029 17:00:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:09.285 17:00:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:12:09.285 17:00:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:09.285 17:00:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:09.285 17:00:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:09.285 17:00:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:09.285 17:00:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:09.285 17:00:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:09.285 17:00:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.285 17:00:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.285 17:00:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.285 17:00:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:09.285 17:00:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:09.543 00:12:09.543 17:00:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:09.543 17:00:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:09.543 17:00:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:09.801 17:01:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:09.801 17:01:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:09.801 17:01:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.801 17:01:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.801 17:01:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.801 17:01:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:09.801 { 00:12:09.801 "cntlid": 67, 00:12:09.801 "qid": 0, 00:12:09.801 "state": "enabled", 00:12:09.801 "thread": "nvmf_tgt_poll_group_000", 00:12:09.801 "listen_address": { 00:12:09.801 "trtype": "TCP", 00:12:09.801 "adrfam": "IPv4", 00:12:09.801 "traddr": "10.0.0.2", 00:12:09.801 "trsvcid": "4420" 00:12:09.801 }, 00:12:09.801 "peer_address": { 00:12:09.801 "trtype": "TCP", 00:12:09.801 
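Editor's annotation: two RPC clients are in play in this trace. rpc_cmd talks to the NVMe-oF target, while hostrpc is a thin wrapper that adds -s /var/tmp/host.sock so the same rpc.py drives a second SPDK application acting as the host (every line tagged target/auth.sh@31 shows that expansion). Per key, the pairing boils down to the sketch below; the NQNs are abbreviated to shell variables, otherwise the flags are exactly the ones in the trace:

    # Target side: authorize the host NQN and bind its DH-HMAC-CHAP keys.
    scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # Host side (SPDK app on /var/tmp/host.sock): attach with matching keys,
    # which forces the DH-HMAC-CHAP handshake on the new admin queue.
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
        -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

The names key1 and ckey1 refer to key objects set up earlier in the script, outside this excerpt.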
"adrfam": "IPv4", 00:12:09.801 "traddr": "10.0.0.1", 00:12:09.801 "trsvcid": "50344" 00:12:09.801 }, 00:12:09.801 "auth": { 00:12:09.801 "state": "completed", 00:12:09.801 "digest": "sha384", 00:12:09.801 "dhgroup": "ffdhe3072" 00:12:09.801 } 00:12:09.801 } 00:12:09.801 ]' 00:12:09.801 17:01:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:09.801 17:01:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:09.801 17:01:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:10.059 17:01:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:10.059 17:01:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:10.059 17:01:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:10.059 17:01:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:10.059 17:01:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:10.316 17:01:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --hostid 0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-secret DHHC-1:01:MjY1ZmJhZGIwMzE3OGYyNDk3NGFiMzQzNDExNDFmZjf1T5Cu: --dhchap-ctrl-secret DHHC-1:02:OWY1ZWI5MjgwNGVkOTAxZTU1OWNmMzk0ZWMwZjBkZWY2ZDBhOGMyOTNkMWViYjE2Ax3/oA==: 00:12:10.881 17:01:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:10.881 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:10.881 17:01:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da 00:12:10.881 17:01:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:10.881 17:01:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.881 17:01:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:10.881 17:01:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:10.881 17:01:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:10.881 17:01:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:11.139 17:01:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:12:11.139 17:01:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:11.139 17:01:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:11.139 17:01:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:11.139 17:01:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:11.139 17:01:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:11.139 17:01:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:11.139 17:01:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.139 17:01:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.139 17:01:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.139 17:01:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:11.139 17:01:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:11.397 00:12:11.397 17:01:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:11.397 17:01:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:11.397 17:01:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:11.655 17:01:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:11.655 17:01:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:11.655 17:01:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.655 17:01:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.655 17:01:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.655 17:01:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:11.655 { 00:12:11.655 "cntlid": 69, 00:12:11.655 "qid": 0, 00:12:11.655 "state": "enabled", 00:12:11.655 "thread": "nvmf_tgt_poll_group_000", 00:12:11.655 "listen_address": { 00:12:11.655 "trtype": "TCP", 00:12:11.655 "adrfam": "IPv4", 00:12:11.655 "traddr": "10.0.0.2", 00:12:11.655 "trsvcid": "4420" 00:12:11.655 }, 00:12:11.655 "peer_address": { 00:12:11.655 "trtype": "TCP", 00:12:11.655 "adrfam": "IPv4", 00:12:11.655 "traddr": "10.0.0.1", 00:12:11.655 "trsvcid": "50380" 00:12:11.655 }, 00:12:11.655 "auth": { 00:12:11.655 "state": "completed", 00:12:11.655 "digest": "sha384", 00:12:11.655 "dhgroup": "ffdhe3072" 00:12:11.655 } 00:12:11.655 } 00:12:11.655 ]' 00:12:11.655 17:01:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:11.912 17:01:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:11.912 17:01:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:11.912 17:01:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:11.912 17:01:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:11.912 17:01:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:11.912 17:01:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:11.912 17:01:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:12.169 17:01:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --hostid 0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-secret DHHC-1:02:MDYxYTUyMzQxMWQ4NDY3ZjcxZmRiOTBjZmVjZDIwNjlmNzlkNDYzMzQ1ZWU3ZmRiS67TDw==: --dhchap-ctrl-secret DHHC-1:01:ODEyODljNTdjNTdkMGEyZThmZTRlN2MyYzkxOWM4MGI4Gx4s: 00:12:12.734 17:01:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:12.734 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:12.734 17:01:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da 00:12:12.734 17:01:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:12.734 17:01:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:12.734 17:01:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:12.734 17:01:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:12.734 17:01:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:12.734 17:01:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:12.992 17:01:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:12:12.992 17:01:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:12.992 17:01:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:12.992 17:01:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:12.992 17:01:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:12.992 17:01:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:12.992 17:01:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-key key3 00:12:12.992 17:01:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:12.992 17:01:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:12.992 17:01:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:12.992 17:01:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:12.992 17:01:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:13.313 00:12:13.313 17:01:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:13.313 
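Editor's annotation: notice that the key3 round above adds the host with --dhchap-key key3 only, with no --dhchap-ctrlr-key. That comes from the expansion at target/auth.sh@37, ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}): when no controller key exists for an index, the :+ form expands to nothing, so the handshake for that key is unidirectional (the host proves itself, the controller is not challenged back). A tiny illustration of the operator with hypothetical values:

    ckeys=( [0]=c0 [1]=c1 [2]=c2 )                                # nothing at index 3
    keyid=3
    ckey=( ${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"} )
    echo "${#ckey[@]}"      # 0 for keyid=3, 2 extra arguments for keyid=0..2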
17:01:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:13.313 17:01:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:13.571 17:01:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:13.571 17:01:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:13.571 17:01:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.571 17:01:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.827 17:01:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.827 17:01:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:13.827 { 00:12:13.827 "cntlid": 71, 00:12:13.827 "qid": 0, 00:12:13.827 "state": "enabled", 00:12:13.827 "thread": "nvmf_tgt_poll_group_000", 00:12:13.827 "listen_address": { 00:12:13.827 "trtype": "TCP", 00:12:13.827 "adrfam": "IPv4", 00:12:13.827 "traddr": "10.0.0.2", 00:12:13.827 "trsvcid": "4420" 00:12:13.827 }, 00:12:13.827 "peer_address": { 00:12:13.827 "trtype": "TCP", 00:12:13.827 "adrfam": "IPv4", 00:12:13.827 "traddr": "10.0.0.1", 00:12:13.827 "trsvcid": "50420" 00:12:13.827 }, 00:12:13.827 "auth": { 00:12:13.827 "state": "completed", 00:12:13.827 "digest": "sha384", 00:12:13.827 "dhgroup": "ffdhe3072" 00:12:13.827 } 00:12:13.827 } 00:12:13.827 ]' 00:12:13.827 17:01:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:13.827 17:01:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:13.827 17:01:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:13.827 17:01:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:13.827 17:01:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:13.827 17:01:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:13.827 17:01:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:13.827 17:01:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:14.085 17:01:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --hostid 0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-secret DHHC-1:03:YmM3NDVmZGZiZjExZTdhOWQwYjMyMzdiMzkyNDVhOGFjY2ViYjBmMDYyZWQ1NDBlZDI4YmMyNDhhZmE3NGMyNC2Swmc=: 00:12:15.019 17:01:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:15.019 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:15.019 17:01:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da 00:12:15.019 17:01:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.019 17:01:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.019 17:01:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.019 17:01:05 
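Editor's annotation: once the SPDK-to-SPDK attach has been verified and torn down, the same credentials are replayed through the Linux kernel initiator with nvme-cli, which is what the nvme connect / nvme disconnect lines above are doing. The secrets are passed in their wire representation, DHHC-1:<t>:<base64>:; as far as I can tell the <t> field records how the secret was generated (00 = plain secret, 01/02/03 = transformed with SHA-256/384/512), but treat that reading as an editor's gloss, the log itself does not say. General shape of the host-side sequence, with addresses and NQNs as in the trace and the secrets replaced by placeholders:

    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$hostnqn" --hostid "$hostid" \
        --dhchap-secret      'DHHC-1:00:<base64 host secret>:' \
        --dhchap-ctrl-secret 'DHHC-1:03:<base64 controller secret>:'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0

For the key3 round just above, --dhchap-ctrl-secret is simply omitted, matching the unidirectional configuration on the target side.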
nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:15.019 17:01:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:15.019 17:01:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:15.019 17:01:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:15.276 17:01:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:12:15.276 17:01:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:15.276 17:01:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:15.276 17:01:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:15.276 17:01:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:15.276 17:01:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:15.276 17:01:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:15.276 17:01:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.276 17:01:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.276 17:01:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.276 17:01:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:15.276 17:01:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:15.534 00:12:15.534 17:01:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:15.534 17:01:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:15.534 17:01:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:15.793 17:01:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:15.793 17:01:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:15.793 17:01:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.793 17:01:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.793 17:01:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.793 17:01:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:15.793 { 00:12:15.793 "cntlid": 73, 00:12:15.793 "qid": 0, 00:12:15.793 "state": "enabled", 00:12:15.793 "thread": "nvmf_tgt_poll_group_000", 00:12:15.793 "listen_address": { 00:12:15.793 "trtype": 
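Editor's annotation: the outer loop has advanced to ffdhe4096 at this point. Between ffdhe rounds, the knob that changes is the host-side bdev_nvme_set_options call, which narrows the digests and DH groups the SPDK initiator will offer; with exactly one of each allowed, the later nvmf_subsystem_get_qpairs check can assert the negotiated values verbatim. In isolation, using the same socket path as the trace:

    # Pin the host to a single digest and DH group for this round.
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096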
"TCP", 00:12:15.793 "adrfam": "IPv4", 00:12:15.793 "traddr": "10.0.0.2", 00:12:15.793 "trsvcid": "4420" 00:12:15.793 }, 00:12:15.793 "peer_address": { 00:12:15.793 "trtype": "TCP", 00:12:15.793 "adrfam": "IPv4", 00:12:15.793 "traddr": "10.0.0.1", 00:12:15.793 "trsvcid": "50428" 00:12:15.793 }, 00:12:15.793 "auth": { 00:12:15.793 "state": "completed", 00:12:15.793 "digest": "sha384", 00:12:15.793 "dhgroup": "ffdhe4096" 00:12:15.793 } 00:12:15.793 } 00:12:15.793 ]' 00:12:15.793 17:01:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:16.051 17:01:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:16.051 17:01:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:16.051 17:01:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:16.051 17:01:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:16.051 17:01:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:16.051 17:01:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:16.051 17:01:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:16.309 17:01:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --hostid 0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-secret DHHC-1:00:ZTA5ZmY5MjMyN2Y4ZGQ4NjZkMTE1NzZiNTgxOTYyMmFiMzBlMGU5MWNlOWFkODk1nIOcHw==: --dhchap-ctrl-secret DHHC-1:03:NTUzODYzZWQ1YTdlMjU3MmIyYjY5NzQ4ZGUwMDliMjgxZDNlY2RkMGMwYzJjYmQyYTk0ZWNhNDExNDIwMGM0YiILRi8=: 00:12:16.875 17:01:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:16.875 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:16.875 17:01:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da 00:12:16.875 17:01:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:16.875 17:01:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.875 17:01:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:16.875 17:01:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:16.875 17:01:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:16.875 17:01:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:17.163 17:01:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:12:17.163 17:01:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:17.163 17:01:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:17.163 17:01:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:17.163 17:01:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:17.163 17:01:07 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:17.163 17:01:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:17.163 17:01:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:17.163 17:01:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.163 17:01:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:17.163 17:01:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:17.163 17:01:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:17.735 00:12:17.735 17:01:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:17.735 17:01:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:17.735 17:01:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:17.993 17:01:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:17.993 17:01:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:17.993 17:01:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:17.993 17:01:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.993 17:01:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:17.993 17:01:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:17.993 { 00:12:17.993 "cntlid": 75, 00:12:17.993 "qid": 0, 00:12:17.993 "state": "enabled", 00:12:17.993 "thread": "nvmf_tgt_poll_group_000", 00:12:17.993 "listen_address": { 00:12:17.993 "trtype": "TCP", 00:12:17.993 "adrfam": "IPv4", 00:12:17.993 "traddr": "10.0.0.2", 00:12:17.993 "trsvcid": "4420" 00:12:17.993 }, 00:12:17.993 "peer_address": { 00:12:17.993 "trtype": "TCP", 00:12:17.993 "adrfam": "IPv4", 00:12:17.993 "traddr": "10.0.0.1", 00:12:17.993 "trsvcid": "50464" 00:12:17.993 }, 00:12:17.993 "auth": { 00:12:17.993 "state": "completed", 00:12:17.993 "digest": "sha384", 00:12:17.993 "dhgroup": "ffdhe4096" 00:12:17.993 } 00:12:17.993 } 00:12:17.993 ]' 00:12:17.993 17:01:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:17.993 17:01:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:17.993 17:01:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:17.993 17:01:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:17.993 17:01:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:17.993 17:01:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed 
== \c\o\m\p\l\e\t\e\d ]] 00:12:17.993 17:01:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:17.993 17:01:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:18.251 17:01:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --hostid 0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-secret DHHC-1:01:MjY1ZmJhZGIwMzE3OGYyNDk3NGFiMzQzNDExNDFmZjf1T5Cu: --dhchap-ctrl-secret DHHC-1:02:OWY1ZWI5MjgwNGVkOTAxZTU1OWNmMzk0ZWMwZjBkZWY2ZDBhOGMyOTNkMWViYjE2Ax3/oA==: 00:12:18.815 17:01:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:18.815 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:18.815 17:01:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da 00:12:18.815 17:01:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:18.815 17:01:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.815 17:01:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:18.815 17:01:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:18.815 17:01:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:18.815 17:01:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:19.073 17:01:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:12:19.073 17:01:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:19.073 17:01:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:19.073 17:01:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:19.073 17:01:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:19.073 17:01:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:19.073 17:01:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:19.073 17:01:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:19.073 17:01:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.073 17:01:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:19.073 17:01:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:19.073 17:01:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 
-a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:19.637 00:12:19.637 17:01:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:19.637 17:01:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:19.637 17:01:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:19.638 17:01:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:19.638 17:01:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:19.638 17:01:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:19.638 17:01:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.895 17:01:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:19.895 17:01:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:19.895 { 00:12:19.895 "cntlid": 77, 00:12:19.895 "qid": 0, 00:12:19.895 "state": "enabled", 00:12:19.895 "thread": "nvmf_tgt_poll_group_000", 00:12:19.895 "listen_address": { 00:12:19.895 "trtype": "TCP", 00:12:19.895 "adrfam": "IPv4", 00:12:19.895 "traddr": "10.0.0.2", 00:12:19.895 "trsvcid": "4420" 00:12:19.895 }, 00:12:19.895 "peer_address": { 00:12:19.895 "trtype": "TCP", 00:12:19.895 "adrfam": "IPv4", 00:12:19.895 "traddr": "10.0.0.1", 00:12:19.895 "trsvcid": "39378" 00:12:19.895 }, 00:12:19.895 "auth": { 00:12:19.895 "state": "completed", 00:12:19.895 "digest": "sha384", 00:12:19.895 "dhgroup": "ffdhe4096" 00:12:19.895 } 00:12:19.895 } 00:12:19.895 ]' 00:12:19.895 17:01:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:19.895 17:01:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:19.895 17:01:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:19.895 17:01:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:19.895 17:01:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:19.895 17:01:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:19.895 17:01:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:19.895 17:01:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:20.152 17:01:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --hostid 0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-secret DHHC-1:02:MDYxYTUyMzQxMWQ4NDY3ZjcxZmRiOTBjZmVjZDIwNjlmNzlkNDYzMzQ1ZWU3ZmRiS67TDw==: --dhchap-ctrl-secret DHHC-1:01:ODEyODljNTdjNTdkMGEyZThmZTRlN2MyYzkxOWM4MGI4Gx4s: 00:12:20.779 17:01:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:20.779 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:20.779 17:01:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da 00:12:20.779 17:01:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.779 17:01:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.779 17:01:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.779 17:01:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:20.779 17:01:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:20.779 17:01:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:21.057 17:01:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:12:21.057 17:01:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:21.057 17:01:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:21.057 17:01:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:21.057 17:01:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:21.057 17:01:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:21.057 17:01:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-key key3 00:12:21.057 17:01:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:21.057 17:01:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.057 17:01:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:21.057 17:01:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:21.057 17:01:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:21.624 00:12:21.624 17:01:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:21.624 17:01:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:21.624 17:01:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:21.895 17:01:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:21.895 17:01:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:21.895 17:01:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:21.895 17:01:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.895 17:01:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:21.895 17:01:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 
00:12:21.895 { 00:12:21.895 "cntlid": 79, 00:12:21.895 "qid": 0, 00:12:21.895 "state": "enabled", 00:12:21.895 "thread": "nvmf_tgt_poll_group_000", 00:12:21.895 "listen_address": { 00:12:21.895 "trtype": "TCP", 00:12:21.895 "adrfam": "IPv4", 00:12:21.895 "traddr": "10.0.0.2", 00:12:21.895 "trsvcid": "4420" 00:12:21.895 }, 00:12:21.895 "peer_address": { 00:12:21.895 "trtype": "TCP", 00:12:21.895 "adrfam": "IPv4", 00:12:21.895 "traddr": "10.0.0.1", 00:12:21.895 "trsvcid": "39412" 00:12:21.895 }, 00:12:21.895 "auth": { 00:12:21.895 "state": "completed", 00:12:21.895 "digest": "sha384", 00:12:21.895 "dhgroup": "ffdhe4096" 00:12:21.895 } 00:12:21.895 } 00:12:21.895 ]' 00:12:21.895 17:01:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:21.895 17:01:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:21.895 17:01:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:21.895 17:01:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:21.895 17:01:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:21.895 17:01:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:21.895 17:01:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:21.895 17:01:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:22.158 17:01:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --hostid 0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-secret DHHC-1:03:YmM3NDVmZGZiZjExZTdhOWQwYjMyMzdiMzkyNDVhOGFjY2ViYjBmMDYyZWQ1NDBlZDI4YmMyNDhhZmE3NGMyNC2Swmc=: 00:12:22.724 17:01:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:22.724 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:22.724 17:01:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da 00:12:22.724 17:01:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.724 17:01:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.724 17:01:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.724 17:01:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:22.724 17:01:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:22.724 17:01:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:22.724 17:01:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:22.983 17:01:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:12:22.983 17:01:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:22.983 17:01:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 
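Editor's annotation: the cntlid reported in the qpair dumps climbs by two per round (65, 67, 69, ... 79 here), presumably because each round creates two controllers on the target: one for the SPDK bdev_nvme attach whose admin qpair is dumped above, and one for the nvme-cli connect that follows before teardown. If you need to trace that progression through a log like this one, something along these lines works (the log file name is hypothetical):

    grep -o '"cntlid": [0-9]*' nvmf_auth_target.log | awk '{print $2}' | sort -nu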
00:12:22.983 17:01:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:22.983 17:01:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:22.983 17:01:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:22.983 17:01:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:22.983 17:01:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.983 17:01:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.983 17:01:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.983 17:01:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:22.983 17:01:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:23.548 00:12:23.548 17:01:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:23.548 17:01:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:23.548 17:01:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:23.809 17:01:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:23.809 17:01:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:23.809 17:01:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:23.809 17:01:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.809 17:01:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:23.809 17:01:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:23.809 { 00:12:23.809 "cntlid": 81, 00:12:23.809 "qid": 0, 00:12:23.809 "state": "enabled", 00:12:23.809 "thread": "nvmf_tgt_poll_group_000", 00:12:23.809 "listen_address": { 00:12:23.809 "trtype": "TCP", 00:12:23.809 "adrfam": "IPv4", 00:12:23.809 "traddr": "10.0.0.2", 00:12:23.809 "trsvcid": "4420" 00:12:23.809 }, 00:12:23.809 "peer_address": { 00:12:23.809 "trtype": "TCP", 00:12:23.809 "adrfam": "IPv4", 00:12:23.809 "traddr": "10.0.0.1", 00:12:23.809 "trsvcid": "39438" 00:12:23.809 }, 00:12:23.809 "auth": { 00:12:23.809 "state": "completed", 00:12:23.809 "digest": "sha384", 00:12:23.809 "dhgroup": "ffdhe6144" 00:12:23.809 } 00:12:23.809 } 00:12:23.809 ]' 00:12:23.809 17:01:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:23.809 17:01:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:23.809 17:01:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:23.809 17:01:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == 
\f\f\d\h\e\6\1\4\4 ]] 00:12:23.809 17:01:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:24.068 17:01:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:24.068 17:01:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:24.068 17:01:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:24.326 17:01:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --hostid 0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-secret DHHC-1:00:ZTA5ZmY5MjMyN2Y4ZGQ4NjZkMTE1NzZiNTgxOTYyMmFiMzBlMGU5MWNlOWFkODk1nIOcHw==: --dhchap-ctrl-secret DHHC-1:03:NTUzODYzZWQ1YTdlMjU3MmIyYjY5NzQ4ZGUwMDliMjgxZDNlY2RkMGMwYzJjYmQyYTk0ZWNhNDExNDIwMGM0YiILRi8=: 00:12:24.892 17:01:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:24.892 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:24.892 17:01:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da 00:12:24.892 17:01:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.892 17:01:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.892 17:01:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.892 17:01:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:24.892 17:01:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:24.892 17:01:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:25.151 17:01:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:12:25.151 17:01:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:25.151 17:01:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:25.151 17:01:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:25.151 17:01:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:25.151 17:01:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:25.151 17:01:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:25.151 17:01:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:25.151 17:01:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.151 17:01:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:25.151 17:01:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:25.151 17:01:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:25.717 00:12:25.717 17:01:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:25.717 17:01:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:25.717 17:01:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:25.717 17:01:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:25.717 17:01:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:25.717 17:01:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:25.717 17:01:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.717 17:01:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:25.717 17:01:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:25.717 { 00:12:25.717 "cntlid": 83, 00:12:25.717 "qid": 0, 00:12:25.717 "state": "enabled", 00:12:25.717 "thread": "nvmf_tgt_poll_group_000", 00:12:25.717 "listen_address": { 00:12:25.717 "trtype": "TCP", 00:12:25.717 "adrfam": "IPv4", 00:12:25.717 "traddr": "10.0.0.2", 00:12:25.717 "trsvcid": "4420" 00:12:25.717 }, 00:12:25.717 "peer_address": { 00:12:25.717 "trtype": "TCP", 00:12:25.717 "adrfam": "IPv4", 00:12:25.717 "traddr": "10.0.0.1", 00:12:25.717 "trsvcid": "39476" 00:12:25.717 }, 00:12:25.717 "auth": { 00:12:25.717 "state": "completed", 00:12:25.717 "digest": "sha384", 00:12:25.717 "dhgroup": "ffdhe6144" 00:12:25.717 } 00:12:25.717 } 00:12:25.717 ]' 00:12:25.717 17:01:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:25.975 17:01:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:25.975 17:01:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:25.975 17:01:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:25.975 17:01:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:25.975 17:01:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:25.975 17:01:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:25.975 17:01:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:26.233 17:01:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --hostid 0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-secret DHHC-1:01:MjY1ZmJhZGIwMzE3OGYyNDk3NGFiMzQzNDExNDFmZjf1T5Cu: --dhchap-ctrl-secret DHHC-1:02:OWY1ZWI5MjgwNGVkOTAxZTU1OWNmMzk0ZWMwZjBkZWY2ZDBhOGMyOTNkMWViYjE2Ax3/oA==: 00:12:26.799 17:01:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:12:26.799 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:26.799 17:01:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da 00:12:26.799 17:01:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.799 17:01:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.799 17:01:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.799 17:01:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:26.799 17:01:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:26.799 17:01:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:27.057 17:01:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:12:27.057 17:01:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:27.057 17:01:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:27.057 17:01:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:27.057 17:01:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:27.057 17:01:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:27.057 17:01:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:27.057 17:01:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.057 17:01:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.057 17:01:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.057 17:01:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:27.057 17:01:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:27.622 00:12:27.622 17:01:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:27.622 17:01:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:27.622 17:01:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:27.880 17:01:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:27.880 17:01:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:27.880 17:01:18 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.880 17:01:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.880 17:01:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.880 17:01:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:27.880 { 00:12:27.880 "cntlid": 85, 00:12:27.880 "qid": 0, 00:12:27.880 "state": "enabled", 00:12:27.880 "thread": "nvmf_tgt_poll_group_000", 00:12:27.880 "listen_address": { 00:12:27.880 "trtype": "TCP", 00:12:27.880 "adrfam": "IPv4", 00:12:27.880 "traddr": "10.0.0.2", 00:12:27.880 "trsvcid": "4420" 00:12:27.880 }, 00:12:27.880 "peer_address": { 00:12:27.880 "trtype": "TCP", 00:12:27.880 "adrfam": "IPv4", 00:12:27.880 "traddr": "10.0.0.1", 00:12:27.880 "trsvcid": "39488" 00:12:27.880 }, 00:12:27.880 "auth": { 00:12:27.880 "state": "completed", 00:12:27.880 "digest": "sha384", 00:12:27.880 "dhgroup": "ffdhe6144" 00:12:27.880 } 00:12:27.880 } 00:12:27.880 ]' 00:12:27.880 17:01:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:27.880 17:01:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:27.880 17:01:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:27.880 17:01:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:27.880 17:01:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:28.138 17:01:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:28.138 17:01:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:28.138 17:01:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:28.415 17:01:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --hostid 0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-secret DHHC-1:02:MDYxYTUyMzQxMWQ4NDY3ZjcxZmRiOTBjZmVjZDIwNjlmNzlkNDYzMzQ1ZWU3ZmRiS67TDw==: --dhchap-ctrl-secret DHHC-1:01:ODEyODljNTdjNTdkMGEyZThmZTRlN2MyYzkxOWM4MGI4Gx4s: 00:12:28.981 17:01:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:28.981 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:28.981 17:01:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da 00:12:28.981 17:01:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:28.981 17:01:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.981 17:01:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:28.981 17:01:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:28.981 17:01:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:28.981 17:01:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:29.239 17:01:19 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:12:29.239 17:01:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:29.239 17:01:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:29.239 17:01:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:29.239 17:01:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:29.239 17:01:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:29.239 17:01:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-key key3 00:12:29.239 17:01:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:29.239 17:01:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.239 17:01:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:29.239 17:01:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:29.239 17:01:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:29.805 00:12:29.805 17:01:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:29.805 17:01:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:29.805 17:01:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:30.063 17:01:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:30.063 17:01:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:30.063 17:01:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:30.063 17:01:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.063 17:01:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:30.063 17:01:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:30.063 { 00:12:30.063 "cntlid": 87, 00:12:30.063 "qid": 0, 00:12:30.063 "state": "enabled", 00:12:30.063 "thread": "nvmf_tgt_poll_group_000", 00:12:30.063 "listen_address": { 00:12:30.063 "trtype": "TCP", 00:12:30.063 "adrfam": "IPv4", 00:12:30.063 "traddr": "10.0.0.2", 00:12:30.063 "trsvcid": "4420" 00:12:30.063 }, 00:12:30.063 "peer_address": { 00:12:30.063 "trtype": "TCP", 00:12:30.063 "adrfam": "IPv4", 00:12:30.063 "traddr": "10.0.0.1", 00:12:30.063 "trsvcid": "46706" 00:12:30.063 }, 00:12:30.063 "auth": { 00:12:30.063 "state": "completed", 00:12:30.063 "digest": "sha384", 00:12:30.063 "dhgroup": "ffdhe6144" 00:12:30.063 } 00:12:30.063 } 00:12:30.063 ]' 00:12:30.063 17:01:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:30.063 17:01:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == 
\s\h\a\3\8\4 ]] 00:12:30.063 17:01:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:30.063 17:01:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:30.063 17:01:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:30.063 17:01:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:30.063 17:01:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:30.063 17:01:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:30.321 17:01:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --hostid 0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-secret DHHC-1:03:YmM3NDVmZGZiZjExZTdhOWQwYjMyMzdiMzkyNDVhOGFjY2ViYjBmMDYyZWQ1NDBlZDI4YmMyNDhhZmE3NGMyNC2Swmc=: 00:12:31.256 17:01:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:31.256 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:31.256 17:01:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da 00:12:31.256 17:01:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:31.256 17:01:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.256 17:01:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:31.256 17:01:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:31.256 17:01:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:31.256 17:01:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:31.256 17:01:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:31.514 17:01:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:12:31.514 17:01:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:31.514 17:01:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:31.514 17:01:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:31.514 17:01:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:31.514 17:01:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:31.514 17:01:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:31.514 17:01:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:31.514 17:01:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.514 17:01:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:31.515 17:01:21 
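
When the sweep moves to a new DH group (here from ffdhe6144 to ffdhe8192), both ends are reconfigured before the next attach: the host-side bdev_nvme options pin the digest and DH group the initiator will offer, and the target-side host entry binds the key pair for this round. A condensed sketch of that step using the same RPC calls as the trace follows; the /var/tmp/host.sock socket, the key names key0/ckey0 and the rpc.py path are taken from the trace itself, and the keyring entries are assumed to have been registered earlier in the test.

# Sketch only, not captured output: per-round (re)configuration as driven by target/auth.sh.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da

# Host side: restrict the initiator to one digest and one DH group for this round.
"$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192

# Target side (default RPC socket): allow the host NQN and bind its key pair.
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Host side: attach a controller; the DH-HMAC-CHAP handshake runs as the
# controller connects.
"$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
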
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:31.515 17:01:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:32.126 00:12:32.126 17:01:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:32.126 17:01:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:32.126 17:01:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:32.384 17:01:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:32.384 17:01:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:32.384 17:01:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:32.384 17:01:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.384 17:01:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:32.384 17:01:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:32.384 { 00:12:32.384 "cntlid": 89, 00:12:32.384 "qid": 0, 00:12:32.384 "state": "enabled", 00:12:32.384 "thread": "nvmf_tgt_poll_group_000", 00:12:32.384 "listen_address": { 00:12:32.384 "trtype": "TCP", 00:12:32.384 "adrfam": "IPv4", 00:12:32.384 "traddr": "10.0.0.2", 00:12:32.384 "trsvcid": "4420" 00:12:32.384 }, 00:12:32.384 "peer_address": { 00:12:32.384 "trtype": "TCP", 00:12:32.384 "adrfam": "IPv4", 00:12:32.384 "traddr": "10.0.0.1", 00:12:32.384 "trsvcid": "46754" 00:12:32.384 }, 00:12:32.384 "auth": { 00:12:32.384 "state": "completed", 00:12:32.384 "digest": "sha384", 00:12:32.384 "dhgroup": "ffdhe8192" 00:12:32.384 } 00:12:32.384 } 00:12:32.384 ]' 00:12:32.384 17:01:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:32.384 17:01:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:32.384 17:01:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:32.384 17:01:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:32.384 17:01:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:32.384 17:01:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:32.384 17:01:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:32.384 17:01:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:32.643 17:01:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --hostid 0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-secret 
DHHC-1:00:ZTA5ZmY5MjMyN2Y4ZGQ4NjZkMTE1NzZiNTgxOTYyMmFiMzBlMGU5MWNlOWFkODk1nIOcHw==: --dhchap-ctrl-secret DHHC-1:03:NTUzODYzZWQ1YTdlMjU3MmIyYjY5NzQ4ZGUwMDliMjgxZDNlY2RkMGMwYzJjYmQyYTk0ZWNhNDExNDIwMGM0YiILRi8=: 00:12:33.576 17:01:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:33.576 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:33.576 17:01:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da 00:12:33.576 17:01:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:33.576 17:01:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.576 17:01:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:33.576 17:01:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:33.576 17:01:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:33.576 17:01:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:33.576 17:01:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:12:33.576 17:01:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:33.576 17:01:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:33.576 17:01:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:33.576 17:01:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:33.576 17:01:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:33.576 17:01:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:33.576 17:01:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:33.576 17:01:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.576 17:01:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:33.576 17:01:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:33.576 17:01:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:34.141 00:12:34.399 17:01:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:34.399 17:01:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:34.399 17:01:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
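
After each attach the test verifies the result in two steps: the host must report a controller named nvme0, and the target-side qpair for that connection must carry the negotiated auth parameters. A minimal sketch of that check, mirroring the jq filters used in the trace (paths and expected values as in this round; bash and jq assumed):

# Sketch only: post-attach verification as performed by connect_authenticate().
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0

# Host side: the attached controller must show up as nvme0.
name=$("$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name')
[[ $name == nvme0 ]]

# Target side: the accepted qpair must report the negotiated digest, DH group,
# and an auth state of "completed".
qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384    ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
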
00:12:34.658 17:01:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:34.658 17:01:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:34.658 17:01:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.658 17:01:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.658 17:01:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.658 17:01:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:34.658 { 00:12:34.658 "cntlid": 91, 00:12:34.658 "qid": 0, 00:12:34.658 "state": "enabled", 00:12:34.658 "thread": "nvmf_tgt_poll_group_000", 00:12:34.658 "listen_address": { 00:12:34.658 "trtype": "TCP", 00:12:34.658 "adrfam": "IPv4", 00:12:34.658 "traddr": "10.0.0.2", 00:12:34.658 "trsvcid": "4420" 00:12:34.658 }, 00:12:34.658 "peer_address": { 00:12:34.658 "trtype": "TCP", 00:12:34.658 "adrfam": "IPv4", 00:12:34.658 "traddr": "10.0.0.1", 00:12:34.658 "trsvcid": "46788" 00:12:34.658 }, 00:12:34.658 "auth": { 00:12:34.658 "state": "completed", 00:12:34.658 "digest": "sha384", 00:12:34.658 "dhgroup": "ffdhe8192" 00:12:34.658 } 00:12:34.658 } 00:12:34.658 ]' 00:12:34.658 17:01:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:34.658 17:01:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:34.658 17:01:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:34.658 17:01:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:34.658 17:01:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:34.658 17:01:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:34.658 17:01:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:34.658 17:01:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:34.917 17:01:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --hostid 0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-secret DHHC-1:01:MjY1ZmJhZGIwMzE3OGYyNDk3NGFiMzQzNDExNDFmZjf1T5Cu: --dhchap-ctrl-secret DHHC-1:02:OWY1ZWI5MjgwNGVkOTAxZTU1OWNmMzk0ZWMwZjBkZWY2ZDBhOGMyOTNkMWViYjE2Ax3/oA==: 00:12:35.850 17:01:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:35.850 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:35.850 17:01:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da 00:12:35.850 17:01:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:35.850 17:01:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.850 17:01:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:35.850 17:01:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:35.850 17:01:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe8192 00:12:35.851 17:01:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:36.109 17:01:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:12:36.109 17:01:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:36.109 17:01:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:36.109 17:01:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:36.109 17:01:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:36.109 17:01:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:36.109 17:01:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:36.109 17:01:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:36.109 17:01:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.109 17:01:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:36.109 17:01:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:36.109 17:01:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:36.676 00:12:36.676 17:01:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:36.676 17:01:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:36.676 17:01:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:36.934 17:01:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:36.934 17:01:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:36.934 17:01:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:36.934 17:01:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.934 17:01:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:36.934 17:01:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:36.934 { 00:12:36.934 "cntlid": 93, 00:12:36.934 "qid": 0, 00:12:36.934 "state": "enabled", 00:12:36.934 "thread": "nvmf_tgt_poll_group_000", 00:12:36.934 "listen_address": { 00:12:36.934 "trtype": "TCP", 00:12:36.934 "adrfam": "IPv4", 00:12:36.934 "traddr": "10.0.0.2", 00:12:36.934 "trsvcid": "4420" 00:12:36.934 }, 00:12:36.934 "peer_address": { 00:12:36.934 "trtype": "TCP", 00:12:36.934 "adrfam": "IPv4", 00:12:36.934 "traddr": "10.0.0.1", 00:12:36.934 "trsvcid": "46810" 00:12:36.934 }, 00:12:36.934 
"auth": { 00:12:36.934 "state": "completed", 00:12:36.934 "digest": "sha384", 00:12:36.934 "dhgroup": "ffdhe8192" 00:12:36.934 } 00:12:36.934 } 00:12:36.934 ]' 00:12:36.934 17:01:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:36.934 17:01:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:36.934 17:01:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:36.934 17:01:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:36.934 17:01:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:36.934 17:01:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:36.934 17:01:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:36.934 17:01:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:37.193 17:01:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --hostid 0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-secret DHHC-1:02:MDYxYTUyMzQxMWQ4NDY3ZjcxZmRiOTBjZmVjZDIwNjlmNzlkNDYzMzQ1ZWU3ZmRiS67TDw==: --dhchap-ctrl-secret DHHC-1:01:ODEyODljNTdjNTdkMGEyZThmZTRlN2MyYzkxOWM4MGI4Gx4s: 00:12:38.130 17:01:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:38.130 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:38.130 17:01:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da 00:12:38.130 17:01:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.130 17:01:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.130 17:01:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.130 17:01:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:38.130 17:01:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:38.130 17:01:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:38.130 17:01:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:12:38.130 17:01:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:38.130 17:01:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:38.130 17:01:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:38.130 17:01:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:38.130 17:01:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:38.130 17:01:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-key key3 00:12:38.130 17:01:28 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.130 17:01:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.130 17:01:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.130 17:01:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:38.130 17:01:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:38.696 00:12:38.696 17:01:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:38.696 17:01:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:38.696 17:01:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:39.277 17:01:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:39.277 17:01:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:39.277 17:01:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:39.277 17:01:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.277 17:01:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:39.277 17:01:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:39.277 { 00:12:39.277 "cntlid": 95, 00:12:39.277 "qid": 0, 00:12:39.277 "state": "enabled", 00:12:39.277 "thread": "nvmf_tgt_poll_group_000", 00:12:39.277 "listen_address": { 00:12:39.277 "trtype": "TCP", 00:12:39.277 "adrfam": "IPv4", 00:12:39.277 "traddr": "10.0.0.2", 00:12:39.277 "trsvcid": "4420" 00:12:39.277 }, 00:12:39.277 "peer_address": { 00:12:39.277 "trtype": "TCP", 00:12:39.277 "adrfam": "IPv4", 00:12:39.277 "traddr": "10.0.0.1", 00:12:39.277 "trsvcid": "46842" 00:12:39.277 }, 00:12:39.277 "auth": { 00:12:39.277 "state": "completed", 00:12:39.277 "digest": "sha384", 00:12:39.277 "dhgroup": "ffdhe8192" 00:12:39.277 } 00:12:39.277 } 00:12:39.277 ]' 00:12:39.277 17:01:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:39.277 17:01:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:39.277 17:01:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:39.277 17:01:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:39.277 17:01:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:39.277 17:01:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:39.277 17:01:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:39.277 17:01:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:39.542 17:01:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --hostid 0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-secret DHHC-1:03:YmM3NDVmZGZiZjExZTdhOWQwYjMyMzdiMzkyNDVhOGFjY2ViYjBmMDYyZWQ1NDBlZDI4YmMyNDhhZmE3NGMyNC2Swmc=: 00:12:40.478 17:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:40.478 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:40.478 17:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da 00:12:40.478 17:01:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:40.478 17:01:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.478 17:01:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:40.478 17:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:12:40.478 17:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:40.478 17:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:40.478 17:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:40.478 17:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:40.478 17:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:12:40.478 17:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:40.478 17:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:40.478 17:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:40.478 17:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:40.478 17:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:40.478 17:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:40.478 17:01:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:40.478 17:01:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.736 17:01:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:40.736 17:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:40.736 17:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:40.994 00:12:40.995 17:01:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:40.995 17:01:31 
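
The markers target/auth.sh@91 through @94 reveal the shape of the sweep: an outer loop over digests, a middle loop over DH groups, and an inner loop over key indices, with each iteration re-running the host-side option setup and one connect_authenticate round. The schematic below reconstructs that structure; hostrpc and connect_authenticate are the auth.sh helpers whose expansions fill this trace, the keys array is registered earlier in the test, and the array contents shown are only the values visible in this excerpt.

# Schematic reconstruction of the sweep, using names visible in the trace.
# hostrpc and connect_authenticate are helpers defined in target/auth.sh; the
# keys/ckeys arrays are populated earlier in the test.
digests=(sha384 sha512)                        # digests seen in this excerpt
dhgroups=(null ffdhe2048 ffdhe6144 ffdhe8192)  # DH groups seen in this excerpt

for digest in "${digests[@]}"; do            # target/auth.sh@91
    for dhgroup in "${dhgroups[@]}"; do      # target/auth.sh@92
        for keyid in "${!keys[@]}"; do       # target/auth.sh@93
            hostrpc bdev_nvme_set_options \
                --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"  # @94
            connect_authenticate "$digest" "$dhgroup" "$keyid"           # @96
        done
    done
done
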
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:40.995 17:01:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:41.251 17:01:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:41.252 17:01:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:41.252 17:01:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.252 17:01:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.252 17:01:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.252 17:01:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:41.252 { 00:12:41.252 "cntlid": 97, 00:12:41.252 "qid": 0, 00:12:41.252 "state": "enabled", 00:12:41.252 "thread": "nvmf_tgt_poll_group_000", 00:12:41.252 "listen_address": { 00:12:41.252 "trtype": "TCP", 00:12:41.252 "adrfam": "IPv4", 00:12:41.252 "traddr": "10.0.0.2", 00:12:41.252 "trsvcid": "4420" 00:12:41.252 }, 00:12:41.252 "peer_address": { 00:12:41.252 "trtype": "TCP", 00:12:41.252 "adrfam": "IPv4", 00:12:41.252 "traddr": "10.0.0.1", 00:12:41.252 "trsvcid": "46878" 00:12:41.252 }, 00:12:41.252 "auth": { 00:12:41.252 "state": "completed", 00:12:41.252 "digest": "sha512", 00:12:41.252 "dhgroup": "null" 00:12:41.252 } 00:12:41.252 } 00:12:41.252 ]' 00:12:41.252 17:01:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:41.252 17:01:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:41.252 17:01:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:41.252 17:01:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:41.252 17:01:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:41.509 17:01:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:41.509 17:01:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:41.509 17:01:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:41.767 17:01:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --hostid 0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-secret DHHC-1:00:ZTA5ZmY5MjMyN2Y4ZGQ4NjZkMTE1NzZiNTgxOTYyMmFiMzBlMGU5MWNlOWFkODk1nIOcHw==: --dhchap-ctrl-secret DHHC-1:03:NTUzODYzZWQ1YTdlMjU3MmIyYjY5NzQ4ZGUwMDliMjgxZDNlY2RkMGMwYzJjYmQyYTk0ZWNhNDExNDIwMGM0YiILRi8=: 00:12:42.335 17:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:42.335 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:42.335 17:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da 00:12:42.335 17:01:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:42.335 17:01:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.335 17:01:32 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:42.335 17:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:42.335 17:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:42.335 17:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:42.594 17:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:12:42.594 17:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:42.594 17:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:42.594 17:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:42.594 17:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:42.594 17:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:42.594 17:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:42.594 17:01:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:42.594 17:01:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.594 17:01:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:42.594 17:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:42.594 17:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:42.852 00:12:42.852 17:01:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:42.852 17:01:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:42.852 17:01:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:43.141 17:01:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:43.141 17:01:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:43.141 17:01:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.141 17:01:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.141 17:01:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.141 17:01:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:43.141 { 00:12:43.141 "cntlid": 99, 00:12:43.141 "qid": 0, 00:12:43.141 "state": "enabled", 00:12:43.141 "thread": "nvmf_tgt_poll_group_000", 00:12:43.141 "listen_address": { 00:12:43.141 "trtype": "TCP", 00:12:43.141 "adrfam": "IPv4", 00:12:43.141 
"traddr": "10.0.0.2", 00:12:43.141 "trsvcid": "4420" 00:12:43.141 }, 00:12:43.141 "peer_address": { 00:12:43.141 "trtype": "TCP", 00:12:43.141 "adrfam": "IPv4", 00:12:43.141 "traddr": "10.0.0.1", 00:12:43.141 "trsvcid": "46910" 00:12:43.141 }, 00:12:43.141 "auth": { 00:12:43.141 "state": "completed", 00:12:43.141 "digest": "sha512", 00:12:43.141 "dhgroup": "null" 00:12:43.141 } 00:12:43.141 } 00:12:43.141 ]' 00:12:43.141 17:01:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:43.141 17:01:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:43.141 17:01:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:43.141 17:01:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:43.141 17:01:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:43.398 17:01:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:43.398 17:01:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:43.398 17:01:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:43.656 17:01:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --hostid 0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-secret DHHC-1:01:MjY1ZmJhZGIwMzE3OGYyNDk3NGFiMzQzNDExNDFmZjf1T5Cu: --dhchap-ctrl-secret DHHC-1:02:OWY1ZWI5MjgwNGVkOTAxZTU1OWNmMzk0ZWMwZjBkZWY2ZDBhOGMyOTNkMWViYjE2Ax3/oA==: 00:12:44.222 17:01:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:44.222 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:44.222 17:01:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da 00:12:44.222 17:01:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.222 17:01:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.222 17:01:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.222 17:01:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:44.222 17:01:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:44.223 17:01:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:44.480 17:01:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:12:44.480 17:01:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:44.480 17:01:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:44.480 17:01:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:44.480 17:01:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:44.480 17:01:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:44.480 17:01:34 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:44.480 17:01:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.480 17:01:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.480 17:01:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.480 17:01:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:44.480 17:01:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:44.738 00:12:45.017 17:01:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:45.017 17:01:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:45.017 17:01:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:45.017 17:01:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:45.017 17:01:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:45.017 17:01:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:45.017 17:01:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.017 17:01:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:45.017 17:01:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:45.017 { 00:12:45.017 "cntlid": 101, 00:12:45.017 "qid": 0, 00:12:45.017 "state": "enabled", 00:12:45.017 "thread": "nvmf_tgt_poll_group_000", 00:12:45.017 "listen_address": { 00:12:45.017 "trtype": "TCP", 00:12:45.017 "adrfam": "IPv4", 00:12:45.017 "traddr": "10.0.0.2", 00:12:45.017 "trsvcid": "4420" 00:12:45.017 }, 00:12:45.017 "peer_address": { 00:12:45.017 "trtype": "TCP", 00:12:45.017 "adrfam": "IPv4", 00:12:45.017 "traddr": "10.0.0.1", 00:12:45.017 "trsvcid": "46938" 00:12:45.017 }, 00:12:45.017 "auth": { 00:12:45.017 "state": "completed", 00:12:45.017 "digest": "sha512", 00:12:45.017 "dhgroup": "null" 00:12:45.017 } 00:12:45.017 } 00:12:45.017 ]' 00:12:45.017 17:01:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:45.273 17:01:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:45.273 17:01:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:45.273 17:01:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:45.273 17:01:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:45.273 17:01:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:45.273 17:01:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:45.274 17:01:35 
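
One detail worth noting across these rounds: the @37 line, ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}), expands to a controller-key option only when a controller key exists for that index, which is why the key3 rounds add the host and attach the controller without any ckey (unidirectional authentication) while key0 through key2 pass both keys. A small, self-contained illustration of that expansion follows; the ckeys contents here are stand-ins, and $3 corresponds to the key index argument of connect_authenticate.

# Illustration of the ${ckeys[$3]:+...} pattern used at target/auth.sh@37.
# Stand-in contents: indices 0-2 have controller keys, index 3 does not.
ckeys=("present" "present" "present" "")

for keyid in 0 1 2 3; do
    # Expands to the option pair only if ckeys[keyid] is set and non-empty.
    ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    echo "key$keyid -> ${ckey[*]:-<no controller key, unidirectional>}"
done
# Output:
# key0 -> --dhchap-ctrlr-key ckey0
# key1 -> --dhchap-ctrlr-key ckey1
# key2 -> --dhchap-ctrlr-key ckey2
# key3 -> <no controller key, unidirectional>
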
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:45.576 17:01:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --hostid 0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-secret DHHC-1:02:MDYxYTUyMzQxMWQ4NDY3ZjcxZmRiOTBjZmVjZDIwNjlmNzlkNDYzMzQ1ZWU3ZmRiS67TDw==: --dhchap-ctrl-secret DHHC-1:01:ODEyODljNTdjNTdkMGEyZThmZTRlN2MyYzkxOWM4MGI4Gx4s: 00:12:46.164 17:01:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:46.164 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:46.164 17:01:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da 00:12:46.164 17:01:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.164 17:01:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.164 17:01:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.164 17:01:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:46.164 17:01:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:46.164 17:01:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:46.445 17:01:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:12:46.445 17:01:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:46.445 17:01:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:46.445 17:01:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:46.445 17:01:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:46.445 17:01:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:46.445 17:01:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-key key3 00:12:46.445 17:01:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.445 17:01:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.445 17:01:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.445 17:01:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:46.445 17:01:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:47.108 00:12:47.108 17:01:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # 
hostrpc bdev_nvme_get_controllers 00:12:47.108 17:01:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:47.108 17:01:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:47.108 17:01:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:47.108 17:01:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:47.108 17:01:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.108 17:01:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.386 17:01:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.386 17:01:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:47.386 { 00:12:47.386 "cntlid": 103, 00:12:47.386 "qid": 0, 00:12:47.386 "state": "enabled", 00:12:47.386 "thread": "nvmf_tgt_poll_group_000", 00:12:47.386 "listen_address": { 00:12:47.386 "trtype": "TCP", 00:12:47.386 "adrfam": "IPv4", 00:12:47.386 "traddr": "10.0.0.2", 00:12:47.386 "trsvcid": "4420" 00:12:47.386 }, 00:12:47.386 "peer_address": { 00:12:47.386 "trtype": "TCP", 00:12:47.386 "adrfam": "IPv4", 00:12:47.386 "traddr": "10.0.0.1", 00:12:47.386 "trsvcid": "46968" 00:12:47.386 }, 00:12:47.386 "auth": { 00:12:47.386 "state": "completed", 00:12:47.386 "digest": "sha512", 00:12:47.386 "dhgroup": "null" 00:12:47.386 } 00:12:47.386 } 00:12:47.386 ]' 00:12:47.386 17:01:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:47.386 17:01:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:47.386 17:01:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:47.386 17:01:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:47.386 17:01:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:47.386 17:01:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:47.386 17:01:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:47.386 17:01:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:47.684 17:01:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --hostid 0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-secret DHHC-1:03:YmM3NDVmZGZiZjExZTdhOWQwYjMyMzdiMzkyNDVhOGFjY2ViYjBmMDYyZWQ1NDBlZDI4YmMyNDhhZmE3NGMyNC2Swmc=: 00:12:48.249 17:01:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:48.249 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:48.249 17:01:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da 00:12:48.249 17:01:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.249 17:01:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.249 17:01:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.249 
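
The secrets handed to nvme connect throughout this log use the DH-HMAC-CHAP secret representation, DHHC-1:<id>:<base64 key material>:, and in this test the id happens to track the key index (00 for key0 up to 03 for key3); in that representation the id names the transform applied to the key, with 00 meaning no transform and 01, 02, 03 meaning SHA-256, SHA-384 and SHA-512. A trivial way to pull the id out of a secret string, with a stand-in value rather than one of the real secrets above:

# Stand-in secret; the real values are printed in full in the trace.
secret='DHHC-1:02:<base64 key material>:'

# Field 2 of the colon-separated string is the transform id (00=none, 01=SHA-256,
# 02=SHA-384, 03=SHA-512 in the DH-HMAC-CHAP secret representation).
hmac_id=$(cut -d: -f2 <<< "$secret")
echo "hmac id: $hmac_id"    # -> hmac id: 02
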
17:01:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:48.249 17:01:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:48.249 17:01:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:48.249 17:01:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:48.507 17:01:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:12:48.507 17:01:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:48.507 17:01:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:48.507 17:01:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:48.507 17:01:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:48.507 17:01:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:48.507 17:01:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:48.507 17:01:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.507 17:01:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.507 17:01:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.507 17:01:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:48.507 17:01:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:48.766 00:12:48.766 17:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:48.766 17:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:48.766 17:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:49.023 17:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:49.023 17:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:49.023 17:01:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.023 17:01:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:49.023 17:01:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.023 17:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:49.023 { 00:12:49.023 "cntlid": 105, 00:12:49.023 "qid": 0, 00:12:49.023 "state": "enabled", 00:12:49.023 "thread": "nvmf_tgt_poll_group_000", 00:12:49.023 "listen_address": { 00:12:49.023 
"trtype": "TCP", 00:12:49.023 "adrfam": "IPv4", 00:12:49.023 "traddr": "10.0.0.2", 00:12:49.023 "trsvcid": "4420" 00:12:49.023 }, 00:12:49.023 "peer_address": { 00:12:49.023 "trtype": "TCP", 00:12:49.023 "adrfam": "IPv4", 00:12:49.023 "traddr": "10.0.0.1", 00:12:49.023 "trsvcid": "46996" 00:12:49.023 }, 00:12:49.023 "auth": { 00:12:49.023 "state": "completed", 00:12:49.023 "digest": "sha512", 00:12:49.023 "dhgroup": "ffdhe2048" 00:12:49.023 } 00:12:49.023 } 00:12:49.023 ]' 00:12:49.023 17:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:49.313 17:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:49.313 17:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:49.313 17:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:49.313 17:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:49.313 17:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:49.313 17:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:49.313 17:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:49.582 17:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --hostid 0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-secret DHHC-1:00:ZTA5ZmY5MjMyN2Y4ZGQ4NjZkMTE1NzZiNTgxOTYyMmFiMzBlMGU5MWNlOWFkODk1nIOcHw==: --dhchap-ctrl-secret DHHC-1:03:NTUzODYzZWQ1YTdlMjU3MmIyYjY5NzQ4ZGUwMDliMjgxZDNlY2RkMGMwYzJjYmQyYTk0ZWNhNDExNDIwMGM0YiILRi8=: 00:12:50.152 17:01:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:50.152 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:50.152 17:01:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da 00:12:50.152 17:01:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:50.152 17:01:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.152 17:01:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:50.152 17:01:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:50.152 17:01:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:50.152 17:01:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:50.409 17:01:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:12:50.409 17:01:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:50.409 17:01:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:50.409 17:01:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:50.409 17:01:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:50.409 17:01:40 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:50.409 17:01:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:50.409 17:01:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:50.409 17:01:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.409 17:01:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:50.409 17:01:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:50.409 17:01:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:50.975 00:12:50.975 17:01:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:50.975 17:01:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:50.975 17:01:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:51.234 17:01:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:51.234 17:01:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:51.234 17:01:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.234 17:01:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.234 17:01:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.234 17:01:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:51.234 { 00:12:51.234 "cntlid": 107, 00:12:51.234 "qid": 0, 00:12:51.234 "state": "enabled", 00:12:51.234 "thread": "nvmf_tgt_poll_group_000", 00:12:51.234 "listen_address": { 00:12:51.234 "trtype": "TCP", 00:12:51.234 "adrfam": "IPv4", 00:12:51.234 "traddr": "10.0.0.2", 00:12:51.234 "trsvcid": "4420" 00:12:51.234 }, 00:12:51.234 "peer_address": { 00:12:51.234 "trtype": "TCP", 00:12:51.234 "adrfam": "IPv4", 00:12:51.234 "traddr": "10.0.0.1", 00:12:51.234 "trsvcid": "60324" 00:12:51.234 }, 00:12:51.234 "auth": { 00:12:51.234 "state": "completed", 00:12:51.234 "digest": "sha512", 00:12:51.234 "dhgroup": "ffdhe2048" 00:12:51.234 } 00:12:51.234 } 00:12:51.234 ]' 00:12:51.234 17:01:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:51.234 17:01:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:51.234 17:01:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:51.234 17:01:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:51.234 17:01:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:51.234 17:01:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed 
== \c\o\m\p\l\e\t\e\d ]] 00:12:51.234 17:01:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:51.234 17:01:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:51.801 17:01:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --hostid 0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-secret DHHC-1:01:MjY1ZmJhZGIwMzE3OGYyNDk3NGFiMzQzNDExNDFmZjf1T5Cu: --dhchap-ctrl-secret DHHC-1:02:OWY1ZWI5MjgwNGVkOTAxZTU1OWNmMzk0ZWMwZjBkZWY2ZDBhOGMyOTNkMWViYjE2Ax3/oA==: 00:12:52.367 17:01:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:52.367 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:52.367 17:01:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da 00:12:52.367 17:01:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.367 17:01:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.367 17:01:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.367 17:01:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:52.367 17:01:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:52.367 17:01:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:52.626 17:01:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:12:52.626 17:01:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:52.626 17:01:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:52.626 17:01:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:52.626 17:01:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:52.626 17:01:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:52.626 17:01:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:52.626 17:01:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.626 17:01:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.626 17:01:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.626 17:01:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:52.626 17:01:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 
-a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:52.885 00:12:52.885 17:01:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:52.885 17:01:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:52.885 17:01:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:53.142 17:01:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:53.142 17:01:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:53.142 17:01:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.142 17:01:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.142 17:01:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.142 17:01:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:53.142 { 00:12:53.142 "cntlid": 109, 00:12:53.142 "qid": 0, 00:12:53.142 "state": "enabled", 00:12:53.142 "thread": "nvmf_tgt_poll_group_000", 00:12:53.142 "listen_address": { 00:12:53.142 "trtype": "TCP", 00:12:53.142 "adrfam": "IPv4", 00:12:53.142 "traddr": "10.0.0.2", 00:12:53.142 "trsvcid": "4420" 00:12:53.142 }, 00:12:53.142 "peer_address": { 00:12:53.142 "trtype": "TCP", 00:12:53.142 "adrfam": "IPv4", 00:12:53.142 "traddr": "10.0.0.1", 00:12:53.142 "trsvcid": "60350" 00:12:53.142 }, 00:12:53.142 "auth": { 00:12:53.142 "state": "completed", 00:12:53.142 "digest": "sha512", 00:12:53.142 "dhgroup": "ffdhe2048" 00:12:53.142 } 00:12:53.142 } 00:12:53.142 ]' 00:12:53.142 17:01:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:53.142 17:01:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:53.142 17:01:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:53.142 17:01:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:53.142 17:01:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:53.399 17:01:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:53.399 17:01:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:53.399 17:01:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:53.658 17:01:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --hostid 0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-secret DHHC-1:02:MDYxYTUyMzQxMWQ4NDY3ZjcxZmRiOTBjZmVjZDIwNjlmNzlkNDYzMzQ1ZWU3ZmRiS67TDw==: --dhchap-ctrl-secret DHHC-1:01:ODEyODljNTdjNTdkMGEyZThmZTRlN2MyYzkxOWM4MGI4Gx4s: 00:12:54.221 17:01:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:54.221 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:54.221 17:01:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da 00:12:54.221 17:01:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.221 17:01:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.221 17:01:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.221 17:01:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:54.221 17:01:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:54.221 17:01:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:54.478 17:01:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:12:54.478 17:01:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:54.478 17:01:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:54.478 17:01:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:54.478 17:01:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:54.478 17:01:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:54.478 17:01:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-key key3 00:12:54.478 17:01:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.478 17:01:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.478 17:01:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.478 17:01:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:54.478 17:01:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:54.735 00:12:54.735 17:01:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:54.735 17:01:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:54.735 17:01:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:55.300 17:01:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:55.300 17:01:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:55.300 17:01:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.300 17:01:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.300 17:01:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.300 17:01:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 
00:12:55.300 { 00:12:55.300 "cntlid": 111, 00:12:55.300 "qid": 0, 00:12:55.300 "state": "enabled", 00:12:55.300 "thread": "nvmf_tgt_poll_group_000", 00:12:55.300 "listen_address": { 00:12:55.300 "trtype": "TCP", 00:12:55.300 "adrfam": "IPv4", 00:12:55.300 "traddr": "10.0.0.2", 00:12:55.300 "trsvcid": "4420" 00:12:55.300 }, 00:12:55.300 "peer_address": { 00:12:55.300 "trtype": "TCP", 00:12:55.300 "adrfam": "IPv4", 00:12:55.300 "traddr": "10.0.0.1", 00:12:55.300 "trsvcid": "60382" 00:12:55.300 }, 00:12:55.300 "auth": { 00:12:55.300 "state": "completed", 00:12:55.300 "digest": "sha512", 00:12:55.300 "dhgroup": "ffdhe2048" 00:12:55.300 } 00:12:55.300 } 00:12:55.300 ]' 00:12:55.300 17:01:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:55.300 17:01:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:55.300 17:01:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:55.300 17:01:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:55.300 17:01:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:55.300 17:01:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:55.300 17:01:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:55.300 17:01:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:55.557 17:01:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --hostid 0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-secret DHHC-1:03:YmM3NDVmZGZiZjExZTdhOWQwYjMyMzdiMzkyNDVhOGFjY2ViYjBmMDYyZWQ1NDBlZDI4YmMyNDhhZmE3NGMyNC2Swmc=: 00:12:56.123 17:01:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:56.123 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:56.123 17:01:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da 00:12:56.123 17:01:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:56.123 17:01:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:56.123 17:01:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:56.123 17:01:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:56.123 17:01:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:56.123 17:01:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:56.123 17:01:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:56.381 17:01:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:12:56.381 17:01:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:56.381 17:01:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 
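The trace above completes the sha512/ffdhe2048 pass over keys 0 through 3 and moves on to ffdhe3072. For readability, here is a condensed sketch of one connect_authenticate iteration as this run drives it; it is not part of the captured output, and the rpc.py path, socket, NQNs and key names are simply copied from the trace, so treat them as values specific to this job rather than defaults.

    # One sha512/ffdhe2048/key0 iteration, condensed from the trace above.
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"
    hostsock="/var/tmp/host.sock"
    subnqn="nqn.2024-03.io.spdk:cnode0"
    hostnqn="nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da"

    # Restrict the host-side bdev layer to the digest/dhgroup combination under test.
    "$rpc" -s "$hostsock" bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048

    # Allow the host on the target with the keypair under test, then attach from the host side.
    "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0
    "$rpc" -s "$hostsock" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n "$subnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # Confirm the qpair negotiated the expected parameters, then tear the controller down again.
    "$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.digest, .[0].auth.dhgroup, .[0].auth.state'
    "$rpc" -s "$hostsock" bdev_nvme_detach_controller nvme0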
00:12:56.381 17:01:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:56.381 17:01:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:56.381 17:01:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:56.381 17:01:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:56.381 17:01:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:56.381 17:01:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:56.381 17:01:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:56.381 17:01:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:56.381 17:01:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:56.960 00:12:56.960 17:01:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:56.960 17:01:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:56.960 17:01:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:56.960 17:01:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:56.960 17:01:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:56.961 17:01:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:56.961 17:01:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:56.961 17:01:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:56.961 17:01:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:56.961 { 00:12:56.961 "cntlid": 113, 00:12:56.961 "qid": 0, 00:12:56.961 "state": "enabled", 00:12:56.961 "thread": "nvmf_tgt_poll_group_000", 00:12:56.961 "listen_address": { 00:12:56.961 "trtype": "TCP", 00:12:56.961 "adrfam": "IPv4", 00:12:56.961 "traddr": "10.0.0.2", 00:12:56.961 "trsvcid": "4420" 00:12:56.961 }, 00:12:56.961 "peer_address": { 00:12:56.961 "trtype": "TCP", 00:12:56.961 "adrfam": "IPv4", 00:12:56.961 "traddr": "10.0.0.1", 00:12:56.961 "trsvcid": "60414" 00:12:56.961 }, 00:12:56.961 "auth": { 00:12:56.961 "state": "completed", 00:12:56.961 "digest": "sha512", 00:12:56.961 "dhgroup": "ffdhe3072" 00:12:56.961 } 00:12:56.961 } 00:12:56.961 ]' 00:12:56.961 17:01:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:57.219 17:01:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:57.219 17:01:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:57.219 17:01:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == 
\f\f\d\h\e\3\0\7\2 ]] 00:12:57.219 17:01:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:57.219 17:01:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:57.219 17:01:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:57.219 17:01:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:57.477 17:01:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --hostid 0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-secret DHHC-1:00:ZTA5ZmY5MjMyN2Y4ZGQ4NjZkMTE1NzZiNTgxOTYyMmFiMzBlMGU5MWNlOWFkODk1nIOcHw==: --dhchap-ctrl-secret DHHC-1:03:NTUzODYzZWQ1YTdlMjU3MmIyYjY5NzQ4ZGUwMDliMjgxZDNlY2RkMGMwYzJjYmQyYTk0ZWNhNDExNDIwMGM0YiILRi8=: 00:12:58.045 17:01:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:58.045 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:58.045 17:01:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da 00:12:58.045 17:01:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.045 17:01:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.045 17:01:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.045 17:01:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:58.045 17:01:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:58.045 17:01:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:58.304 17:01:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:12:58.304 17:01:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:58.304 17:01:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:58.304 17:01:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:58.304 17:01:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:58.304 17:01:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:58.304 17:01:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:58.304 17:01:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.304 17:01:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.304 17:01:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.304 17:01:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:58.304 17:01:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:58.871 00:12:58.871 17:01:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:58.871 17:01:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:58.871 17:01:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:58.871 17:01:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:58.871 17:01:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:58.871 17:01:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.871 17:01:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:59.130 17:01:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:59.130 17:01:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:59.130 { 00:12:59.130 "cntlid": 115, 00:12:59.130 "qid": 0, 00:12:59.130 "state": "enabled", 00:12:59.130 "thread": "nvmf_tgt_poll_group_000", 00:12:59.130 "listen_address": { 00:12:59.130 "trtype": "TCP", 00:12:59.130 "adrfam": "IPv4", 00:12:59.130 "traddr": "10.0.0.2", 00:12:59.130 "trsvcid": "4420" 00:12:59.130 }, 00:12:59.130 "peer_address": { 00:12:59.130 "trtype": "TCP", 00:12:59.130 "adrfam": "IPv4", 00:12:59.130 "traddr": "10.0.0.1", 00:12:59.130 "trsvcid": "60444" 00:12:59.130 }, 00:12:59.130 "auth": { 00:12:59.130 "state": "completed", 00:12:59.130 "digest": "sha512", 00:12:59.130 "dhgroup": "ffdhe3072" 00:12:59.130 } 00:12:59.130 } 00:12:59.130 ]' 00:12:59.130 17:01:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:59.130 17:01:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:59.130 17:01:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:59.130 17:01:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:59.130 17:01:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:59.130 17:01:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:59.130 17:01:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:59.130 17:01:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:59.389 17:01:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --hostid 0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-secret DHHC-1:01:MjY1ZmJhZGIwMzE3OGYyNDk3NGFiMzQzNDExNDFmZjf1T5Cu: --dhchap-ctrl-secret DHHC-1:02:OWY1ZWI5MjgwNGVkOTAxZTU1OWNmMzk0ZWMwZjBkZWY2ZDBhOGMyOTNkMWViYjE2Ax3/oA==: 00:13:00.339 17:01:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:13:00.339 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:00.339 17:01:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da 00:13:00.339 17:01:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.339 17:01:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.339 17:01:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.339 17:01:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:00.339 17:01:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:00.339 17:01:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:00.339 17:01:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:13:00.339 17:01:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:00.339 17:01:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:00.339 17:01:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:13:00.339 17:01:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:00.339 17:01:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:00.339 17:01:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:00.339 17:01:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.339 17:01:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.339 17:01:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.339 17:01:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:00.339 17:01:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:00.905 00:13:00.905 17:01:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:00.905 17:01:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:00.905 17:01:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:01.163 17:01:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:01.163 17:01:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:01.163 17:01:51 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.163 17:01:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.163 17:01:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.163 17:01:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:01.163 { 00:13:01.163 "cntlid": 117, 00:13:01.163 "qid": 0, 00:13:01.163 "state": "enabled", 00:13:01.163 "thread": "nvmf_tgt_poll_group_000", 00:13:01.163 "listen_address": { 00:13:01.163 "trtype": "TCP", 00:13:01.163 "adrfam": "IPv4", 00:13:01.163 "traddr": "10.0.0.2", 00:13:01.163 "trsvcid": "4420" 00:13:01.163 }, 00:13:01.163 "peer_address": { 00:13:01.163 "trtype": "TCP", 00:13:01.163 "adrfam": "IPv4", 00:13:01.163 "traddr": "10.0.0.1", 00:13:01.163 "trsvcid": "48992" 00:13:01.163 }, 00:13:01.163 "auth": { 00:13:01.163 "state": "completed", 00:13:01.163 "digest": "sha512", 00:13:01.163 "dhgroup": "ffdhe3072" 00:13:01.163 } 00:13:01.163 } 00:13:01.163 ]' 00:13:01.163 17:01:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:01.163 17:01:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:01.163 17:01:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:01.163 17:01:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:01.163 17:01:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:01.163 17:01:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:01.163 17:01:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:01.163 17:01:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:01.421 17:01:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --hostid 0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-secret DHHC-1:02:MDYxYTUyMzQxMWQ4NDY3ZjcxZmRiOTBjZmVjZDIwNjlmNzlkNDYzMzQ1ZWU3ZmRiS67TDw==: --dhchap-ctrl-secret DHHC-1:01:ODEyODljNTdjNTdkMGEyZThmZTRlN2MyYzkxOWM4MGI4Gx4s: 00:13:01.986 17:01:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:02.250 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:02.250 17:01:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da 00:13:02.250 17:01:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:02.250 17:01:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:02.250 17:01:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:02.250 17:01:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:02.250 17:01:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:02.250 17:01:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:02.511 17:01:52 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:13:02.511 17:01:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:02.511 17:01:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:02.511 17:01:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:13:02.511 17:01:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:02.511 17:01:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:02.511 17:01:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-key key3 00:13:02.511 17:01:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:02.511 17:01:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:02.511 17:01:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:02.511 17:01:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:02.511 17:01:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:02.798 00:13:02.798 17:01:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:02.798 17:01:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:02.798 17:01:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:03.055 17:01:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:03.055 17:01:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:03.055 17:01:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:03.055 17:01:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:03.055 17:01:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:03.055 17:01:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:03.055 { 00:13:03.055 "cntlid": 119, 00:13:03.055 "qid": 0, 00:13:03.055 "state": "enabled", 00:13:03.055 "thread": "nvmf_tgt_poll_group_000", 00:13:03.055 "listen_address": { 00:13:03.055 "trtype": "TCP", 00:13:03.055 "adrfam": "IPv4", 00:13:03.055 "traddr": "10.0.0.2", 00:13:03.055 "trsvcid": "4420" 00:13:03.055 }, 00:13:03.055 "peer_address": { 00:13:03.055 "trtype": "TCP", 00:13:03.055 "adrfam": "IPv4", 00:13:03.055 "traddr": "10.0.0.1", 00:13:03.055 "trsvcid": "49022" 00:13:03.055 }, 00:13:03.055 "auth": { 00:13:03.055 "state": "completed", 00:13:03.055 "digest": "sha512", 00:13:03.055 "dhgroup": "ffdhe3072" 00:13:03.055 } 00:13:03.055 } 00:13:03.055 ]' 00:13:03.055 17:01:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:03.055 17:01:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == 
\s\h\a\5\1\2 ]] 00:13:03.055 17:01:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:03.055 17:01:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:03.055 17:01:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:03.055 17:01:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:03.055 17:01:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:03.055 17:01:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:03.314 17:01:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --hostid 0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-secret DHHC-1:03:YmM3NDVmZGZiZjExZTdhOWQwYjMyMzdiMzkyNDVhOGFjY2ViYjBmMDYyZWQ1NDBlZDI4YmMyNDhhZmE3NGMyNC2Swmc=: 00:13:04.248 17:01:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:04.248 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:04.248 17:01:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da 00:13:04.248 17:01:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:04.248 17:01:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:04.248 17:01:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:04.248 17:01:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:04.248 17:01:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:04.248 17:01:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:04.248 17:01:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:04.248 17:01:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:13:04.248 17:01:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:04.248 17:01:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:04.248 17:01:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:04.248 17:01:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:04.248 17:01:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:04.248 17:01:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:04.248 17:01:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:04.248 17:01:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:04.248 17:01:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:04.248 17:01:54 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:04.248 17:01:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:04.815 00:13:04.815 17:01:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:04.815 17:01:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:04.815 17:01:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:05.073 17:01:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:05.073 17:01:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:05.073 17:01:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:05.073 17:01:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:05.073 17:01:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:05.073 17:01:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:05.073 { 00:13:05.073 "cntlid": 121, 00:13:05.073 "qid": 0, 00:13:05.073 "state": "enabled", 00:13:05.073 "thread": "nvmf_tgt_poll_group_000", 00:13:05.073 "listen_address": { 00:13:05.073 "trtype": "TCP", 00:13:05.073 "adrfam": "IPv4", 00:13:05.073 "traddr": "10.0.0.2", 00:13:05.073 "trsvcid": "4420" 00:13:05.073 }, 00:13:05.073 "peer_address": { 00:13:05.073 "trtype": "TCP", 00:13:05.073 "adrfam": "IPv4", 00:13:05.073 "traddr": "10.0.0.1", 00:13:05.073 "trsvcid": "49048" 00:13:05.073 }, 00:13:05.073 "auth": { 00:13:05.073 "state": "completed", 00:13:05.073 "digest": "sha512", 00:13:05.073 "dhgroup": "ffdhe4096" 00:13:05.073 } 00:13:05.073 } 00:13:05.073 ]' 00:13:05.073 17:01:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:05.073 17:01:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:05.073 17:01:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:05.073 17:01:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:05.073 17:01:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:05.331 17:01:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:05.331 17:01:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:05.331 17:01:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:05.589 17:01:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --hostid 0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-secret 
DHHC-1:00:ZTA5ZmY5MjMyN2Y4ZGQ4NjZkMTE1NzZiNTgxOTYyMmFiMzBlMGU5MWNlOWFkODk1nIOcHw==: --dhchap-ctrl-secret DHHC-1:03:NTUzODYzZWQ1YTdlMjU3MmIyYjY5NzQ4ZGUwMDliMjgxZDNlY2RkMGMwYzJjYmQyYTk0ZWNhNDExNDIwMGM0YiILRi8=: 00:13:06.156 17:01:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:06.156 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:06.156 17:01:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da 00:13:06.156 17:01:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.156 17:01:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.157 17:01:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.157 17:01:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:06.157 17:01:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:06.157 17:01:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:06.414 17:01:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:13:06.414 17:01:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:06.414 17:01:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:06.415 17:01:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:06.415 17:01:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:06.415 17:01:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:06.415 17:01:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:06.415 17:01:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.415 17:01:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.415 17:01:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.415 17:01:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:06.415 17:01:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:06.981 00:13:06.981 17:01:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:06.981 17:01:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:06.981 17:01:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
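Between iterations the test also validates the same keypair from the kernel initiator: it connects with nvme-cli using the generated DHHC-1 secrets, disconnects, and removes the host entry so the next key/dhgroup combination starts from a clean allow list. A condensed sketch of that step follows, with the long per-run secrets abbreviated to <key>/<ctrl-key> placeholders (the full values are printed in the trace).

    # Kernel-initiator check, condensed from the nvme connect/disconnect entries above.
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da \
        --hostid 0b4e8503-7bac-4879-926a-209303c4b3da \
        --dhchap-secret "<key>" --dhchap-ctrl-secret "<ctrl-key>"
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0

    # Drop the host entry again so the next combination is authorized from scratch.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da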
00:13:07.239 17:01:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:07.239 17:01:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:07.239 17:01:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:07.239 17:01:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.239 17:01:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:07.239 17:01:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:07.239 { 00:13:07.239 "cntlid": 123, 00:13:07.239 "qid": 0, 00:13:07.239 "state": "enabled", 00:13:07.239 "thread": "nvmf_tgt_poll_group_000", 00:13:07.239 "listen_address": { 00:13:07.239 "trtype": "TCP", 00:13:07.239 "adrfam": "IPv4", 00:13:07.239 "traddr": "10.0.0.2", 00:13:07.239 "trsvcid": "4420" 00:13:07.239 }, 00:13:07.239 "peer_address": { 00:13:07.239 "trtype": "TCP", 00:13:07.239 "adrfam": "IPv4", 00:13:07.239 "traddr": "10.0.0.1", 00:13:07.239 "trsvcid": "49064" 00:13:07.239 }, 00:13:07.239 "auth": { 00:13:07.239 "state": "completed", 00:13:07.239 "digest": "sha512", 00:13:07.239 "dhgroup": "ffdhe4096" 00:13:07.239 } 00:13:07.239 } 00:13:07.239 ]' 00:13:07.239 17:01:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:07.239 17:01:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:07.239 17:01:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:07.239 17:01:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:07.239 17:01:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:07.239 17:01:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:07.498 17:01:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:07.498 17:01:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:07.756 17:01:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --hostid 0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-secret DHHC-1:01:MjY1ZmJhZGIwMzE3OGYyNDk3NGFiMzQzNDExNDFmZjf1T5Cu: --dhchap-ctrl-secret DHHC-1:02:OWY1ZWI5MjgwNGVkOTAxZTU1OWNmMzk0ZWMwZjBkZWY2ZDBhOGMyOTNkMWViYjE2Ax3/oA==: 00:13:08.321 17:01:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:08.321 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:08.321 17:01:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da 00:13:08.321 17:01:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:08.321 17:01:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.321 17:01:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:08.321 17:01:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:08.321 17:01:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha512 --dhchap-dhgroups ffdhe4096 00:13:08.321 17:01:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:08.578 17:01:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:13:08.578 17:01:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:08.578 17:01:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:08.578 17:01:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:08.578 17:01:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:08.578 17:01:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:08.578 17:01:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:08.578 17:01:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:08.578 17:01:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.578 17:01:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:08.578 17:01:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:08.578 17:01:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:08.836 00:13:08.836 17:01:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:08.836 17:01:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:08.836 17:01:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:09.093 17:01:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:09.093 17:01:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:09.093 17:01:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.093 17:01:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:09.093 17:01:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.093 17:01:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:09.093 { 00:13:09.093 "cntlid": 125, 00:13:09.093 "qid": 0, 00:13:09.093 "state": "enabled", 00:13:09.093 "thread": "nvmf_tgt_poll_group_000", 00:13:09.093 "listen_address": { 00:13:09.093 "trtype": "TCP", 00:13:09.093 "adrfam": "IPv4", 00:13:09.093 "traddr": "10.0.0.2", 00:13:09.093 "trsvcid": "4420" 00:13:09.093 }, 00:13:09.093 "peer_address": { 00:13:09.093 "trtype": "TCP", 00:13:09.093 "adrfam": "IPv4", 00:13:09.093 "traddr": "10.0.0.1", 00:13:09.093 "trsvcid": "49086" 00:13:09.093 }, 00:13:09.093 
"auth": { 00:13:09.093 "state": "completed", 00:13:09.093 "digest": "sha512", 00:13:09.093 "dhgroup": "ffdhe4096" 00:13:09.093 } 00:13:09.093 } 00:13:09.093 ]' 00:13:09.093 17:01:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:09.351 17:01:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:09.351 17:01:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:09.351 17:01:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:09.351 17:01:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:09.351 17:01:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:09.351 17:01:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:09.351 17:01:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:09.608 17:01:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --hostid 0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-secret DHHC-1:02:MDYxYTUyMzQxMWQ4NDY3ZjcxZmRiOTBjZmVjZDIwNjlmNzlkNDYzMzQ1ZWU3ZmRiS67TDw==: --dhchap-ctrl-secret DHHC-1:01:ODEyODljNTdjNTdkMGEyZThmZTRlN2MyYzkxOWM4MGI4Gx4s: 00:13:10.174 17:02:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:10.174 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:10.432 17:02:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da 00:13:10.432 17:02:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.432 17:02:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.432 17:02:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.432 17:02:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:10.432 17:02:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:10.432 17:02:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:10.690 17:02:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:13:10.690 17:02:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:10.690 17:02:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:10.690 17:02:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:10.690 17:02:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:10.690 17:02:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:10.690 17:02:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-key key3 00:13:10.690 17:02:00 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.690 17:02:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.690 17:02:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.690 17:02:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:10.690 17:02:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:10.949 00:13:11.206 17:02:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:11.206 17:02:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:11.206 17:02:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:11.464 17:02:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:11.464 17:02:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:11.464 17:02:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:11.464 17:02:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:11.464 17:02:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:11.464 17:02:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:11.464 { 00:13:11.464 "cntlid": 127, 00:13:11.464 "qid": 0, 00:13:11.464 "state": "enabled", 00:13:11.464 "thread": "nvmf_tgt_poll_group_000", 00:13:11.464 "listen_address": { 00:13:11.464 "trtype": "TCP", 00:13:11.464 "adrfam": "IPv4", 00:13:11.464 "traddr": "10.0.0.2", 00:13:11.464 "trsvcid": "4420" 00:13:11.464 }, 00:13:11.464 "peer_address": { 00:13:11.464 "trtype": "TCP", 00:13:11.464 "adrfam": "IPv4", 00:13:11.464 "traddr": "10.0.0.1", 00:13:11.464 "trsvcid": "38528" 00:13:11.464 }, 00:13:11.464 "auth": { 00:13:11.464 "state": "completed", 00:13:11.464 "digest": "sha512", 00:13:11.464 "dhgroup": "ffdhe4096" 00:13:11.464 } 00:13:11.464 } 00:13:11.464 ]' 00:13:11.464 17:02:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:11.464 17:02:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:11.464 17:02:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:11.464 17:02:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:11.464 17:02:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:11.464 17:02:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:11.464 17:02:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:11.464 17:02:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:11.721 17:02:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --hostid 0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-secret DHHC-1:03:YmM3NDVmZGZiZjExZTdhOWQwYjMyMzdiMzkyNDVhOGFjY2ViYjBmMDYyZWQ1NDBlZDI4YmMyNDhhZmE3NGMyNC2Swmc=: 00:13:12.651 17:02:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:12.651 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:12.651 17:02:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da 00:13:12.651 17:02:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:12.651 17:02:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.651 17:02:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:12.651 17:02:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:12.651 17:02:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:12.651 17:02:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:12.651 17:02:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:12.651 17:02:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:13:12.651 17:02:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:12.651 17:02:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:12.651 17:02:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:12.651 17:02:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:12.651 17:02:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:12.651 17:02:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:12.651 17:02:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:12.651 17:02:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.651 17:02:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:12.651 17:02:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:12.651 17:02:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:13.262 00:13:13.262 17:02:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:13.262 17:02:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 
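The checks echoed at target/auth.sh@44-@48 above are the verification half of connect_authenticate: the host-side controller must exist, and the target must report a qpair whose DH-HMAC-CHAP negotiation completed with the requested parameters. A condensed sketch of that check, reconstructed from the trace (rpc_cmd and hostrpc are the test's own target-side and host-side RPC wrappers; digest and dhgroup hold the values of the current iteration):

    # host side: the attached controller should be visible as nvme0
    [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    # target side: fetch the subsystem's qpairs and assert the negotiated auth fields
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest"  ]]   # e.g. sha512
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]   # e.g. ffdhe6144
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed  ]]   # handshake finished
    # tear the in-process controller down again before the nvme-cli pass
    hostrpc bdev_nvme_detach_controller nvme0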
00:13:13.262 17:02:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:13.520 17:02:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:13.520 17:02:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:13.520 17:02:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:13.520 17:02:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:13.520 17:02:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:13.520 17:02:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:13.520 { 00:13:13.520 "cntlid": 129, 00:13:13.520 "qid": 0, 00:13:13.520 "state": "enabled", 00:13:13.520 "thread": "nvmf_tgt_poll_group_000", 00:13:13.520 "listen_address": { 00:13:13.520 "trtype": "TCP", 00:13:13.520 "adrfam": "IPv4", 00:13:13.520 "traddr": "10.0.0.2", 00:13:13.520 "trsvcid": "4420" 00:13:13.520 }, 00:13:13.520 "peer_address": { 00:13:13.520 "trtype": "TCP", 00:13:13.520 "adrfam": "IPv4", 00:13:13.520 "traddr": "10.0.0.1", 00:13:13.520 "trsvcid": "38558" 00:13:13.520 }, 00:13:13.520 "auth": { 00:13:13.520 "state": "completed", 00:13:13.520 "digest": "sha512", 00:13:13.520 "dhgroup": "ffdhe6144" 00:13:13.520 } 00:13:13.520 } 00:13:13.520 ]' 00:13:13.520 17:02:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:13.520 17:02:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:13.520 17:02:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:13.520 17:02:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:13.520 17:02:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:13.520 17:02:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:13.520 17:02:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:13.520 17:02:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:13.780 17:02:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --hostid 0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-secret DHHC-1:00:ZTA5ZmY5MjMyN2Y4ZGQ4NjZkMTE1NzZiNTgxOTYyMmFiMzBlMGU5MWNlOWFkODk1nIOcHw==: --dhchap-ctrl-secret DHHC-1:03:NTUzODYzZWQ1YTdlMjU3MmIyYjY5NzQ4ZGUwMDliMjgxZDNlY2RkMGMwYzJjYmQyYTk0ZWNhNDExNDIwMGM0YiILRi8=: 00:13:14.345 17:02:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:14.733 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:14.733 17:02:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da 00:13:14.733 17:02:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.733 17:02:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.733 17:02:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.733 
17:02:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:14.733 17:02:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:14.733 17:02:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:14.733 17:02:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:13:14.733 17:02:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:14.733 17:02:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:14.733 17:02:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:14.733 17:02:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:14.733 17:02:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:14.733 17:02:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:14.733 17:02:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.733 17:02:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.733 17:02:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.733 17:02:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:14.733 17:02:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:15.298 00:13:15.298 17:02:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:15.298 17:02:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:15.298 17:02:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:15.556 17:02:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:15.556 17:02:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:15.556 17:02:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.556 17:02:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:15.556 17:02:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.556 17:02:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:15.556 { 00:13:15.556 "cntlid": 131, 00:13:15.556 "qid": 0, 00:13:15.556 "state": "enabled", 00:13:15.556 "thread": "nvmf_tgt_poll_group_000", 00:13:15.556 "listen_address": { 00:13:15.556 "trtype": "TCP", 00:13:15.556 "adrfam": "IPv4", 00:13:15.556 "traddr": "10.0.0.2", 00:13:15.556 "trsvcid": 
"4420" 00:13:15.556 }, 00:13:15.556 "peer_address": { 00:13:15.556 "trtype": "TCP", 00:13:15.556 "adrfam": "IPv4", 00:13:15.556 "traddr": "10.0.0.1", 00:13:15.556 "trsvcid": "38578" 00:13:15.556 }, 00:13:15.556 "auth": { 00:13:15.556 "state": "completed", 00:13:15.556 "digest": "sha512", 00:13:15.556 "dhgroup": "ffdhe6144" 00:13:15.556 } 00:13:15.556 } 00:13:15.556 ]' 00:13:15.556 17:02:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:15.556 17:02:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:15.556 17:02:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:15.556 17:02:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:15.556 17:02:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:15.556 17:02:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:15.556 17:02:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:15.556 17:02:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:15.814 17:02:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --hostid 0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-secret DHHC-1:01:MjY1ZmJhZGIwMzE3OGYyNDk3NGFiMzQzNDExNDFmZjf1T5Cu: --dhchap-ctrl-secret DHHC-1:02:OWY1ZWI5MjgwNGVkOTAxZTU1OWNmMzk0ZWMwZjBkZWY2ZDBhOGMyOTNkMWViYjE2Ax3/oA==: 00:13:16.748 17:02:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:16.748 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:16.748 17:02:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da 00:13:16.748 17:02:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:16.748 17:02:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.748 17:02:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:16.748 17:02:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:16.748 17:02:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:16.748 17:02:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:17.006 17:02:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:13:17.006 17:02:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:17.006 17:02:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:17.006 17:02:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:17.006 17:02:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:17.006 17:02:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:17.006 17:02:07 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:17.006 17:02:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:17.006 17:02:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.006 17:02:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:17.006 17:02:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:17.006 17:02:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:17.263 00:13:17.263 17:02:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:17.263 17:02:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:17.263 17:02:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:17.527 17:02:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:17.527 17:02:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:17.527 17:02:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:17.527 17:02:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.527 17:02:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:17.527 17:02:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:17.527 { 00:13:17.527 "cntlid": 133, 00:13:17.527 "qid": 0, 00:13:17.527 "state": "enabled", 00:13:17.527 "thread": "nvmf_tgt_poll_group_000", 00:13:17.527 "listen_address": { 00:13:17.527 "trtype": "TCP", 00:13:17.527 "adrfam": "IPv4", 00:13:17.527 "traddr": "10.0.0.2", 00:13:17.527 "trsvcid": "4420" 00:13:17.527 }, 00:13:17.527 "peer_address": { 00:13:17.527 "trtype": "TCP", 00:13:17.527 "adrfam": "IPv4", 00:13:17.527 "traddr": "10.0.0.1", 00:13:17.527 "trsvcid": "38606" 00:13:17.527 }, 00:13:17.527 "auth": { 00:13:17.527 "state": "completed", 00:13:17.527 "digest": "sha512", 00:13:17.527 "dhgroup": "ffdhe6144" 00:13:17.527 } 00:13:17.527 } 00:13:17.527 ]' 00:13:17.527 17:02:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:17.785 17:02:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:17.785 17:02:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:17.785 17:02:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:17.785 17:02:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:17.785 17:02:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:17.785 17:02:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 
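The surrounding entries show one full pass of the sha512/ffdhe6144 loop. A condensed sketch of a single pass as it appears in this trace (hostnqn, hostid, key, and ckey stand in for the UUID-based host NQN and the DHHC-1 secrets printed above; they are not variable names taken verbatim from the script):

    # host side: restrict the initiator to the digest/dhgroup pair under test
    hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
    # target side: authorize the host NQN with key2 (and ckey2 for bidirectional auth)
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2
    # in-process initiator: attach, verify the qpair as above, then detach
    hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
    hostrpc bdev_nvme_detach_controller nvme0
    # kernel initiator: same secrets, passed to nvme-cli in DHHC-1 form
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$hostnqn" --hostid "$hostid" --dhchap-secret "$key" --dhchap-ctrl-secret "$ckey"
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"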
00:13:17.785 17:02:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:18.043 17:02:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --hostid 0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-secret DHHC-1:02:MDYxYTUyMzQxMWQ4NDY3ZjcxZmRiOTBjZmVjZDIwNjlmNzlkNDYzMzQ1ZWU3ZmRiS67TDw==: --dhchap-ctrl-secret DHHC-1:01:ODEyODljNTdjNTdkMGEyZThmZTRlN2MyYzkxOWM4MGI4Gx4s: 00:13:18.975 17:02:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:18.975 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:18.975 17:02:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da 00:13:18.975 17:02:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:18.975 17:02:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.975 17:02:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:18.975 17:02:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:18.975 17:02:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:18.975 17:02:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:18.975 17:02:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:13:18.975 17:02:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:18.975 17:02:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:18.975 17:02:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:18.975 17:02:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:18.975 17:02:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:18.975 17:02:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-key key3 00:13:18.975 17:02:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:18.975 17:02:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.975 17:02:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:18.975 17:02:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:18.975 17:02:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:19.539 00:13:19.539 17:02:09 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:19.539 17:02:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:19.539 17:02:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:19.796 17:02:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:19.796 17:02:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:19.796 17:02:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:19.796 17:02:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.796 17:02:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:19.796 17:02:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:19.796 { 00:13:19.796 "cntlid": 135, 00:13:19.796 "qid": 0, 00:13:19.796 "state": "enabled", 00:13:19.796 "thread": "nvmf_tgt_poll_group_000", 00:13:19.796 "listen_address": { 00:13:19.796 "trtype": "TCP", 00:13:19.796 "adrfam": "IPv4", 00:13:19.796 "traddr": "10.0.0.2", 00:13:19.796 "trsvcid": "4420" 00:13:19.796 }, 00:13:19.796 "peer_address": { 00:13:19.796 "trtype": "TCP", 00:13:19.796 "adrfam": "IPv4", 00:13:19.796 "traddr": "10.0.0.1", 00:13:19.796 "trsvcid": "45576" 00:13:19.796 }, 00:13:19.796 "auth": { 00:13:19.796 "state": "completed", 00:13:19.796 "digest": "sha512", 00:13:19.796 "dhgroup": "ffdhe6144" 00:13:19.796 } 00:13:19.796 } 00:13:19.796 ]' 00:13:19.796 17:02:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:19.796 17:02:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:19.796 17:02:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:19.796 17:02:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:19.796 17:02:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:19.796 17:02:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:19.796 17:02:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:19.796 17:02:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:20.053 17:02:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --hostid 0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-secret DHHC-1:03:YmM3NDVmZGZiZjExZTdhOWQwYjMyMzdiMzkyNDVhOGFjY2ViYjBmMDYyZWQ1NDBlZDI4YmMyNDhhZmE3NGMyNC2Swmc=: 00:13:20.985 17:02:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:20.985 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:20.985 17:02:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da 00:13:20.985 17:02:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:20.985 17:02:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:20.985 17:02:10 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:20.985 17:02:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:20.985 17:02:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:20.985 17:02:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:20.985 17:02:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:20.985 17:02:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:13:20.985 17:02:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:20.985 17:02:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:20.985 17:02:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:20.985 17:02:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:20.985 17:02:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:20.985 17:02:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:20.985 17:02:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:20.985 17:02:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:20.985 17:02:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:20.985 17:02:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:20.985 17:02:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:21.930 00:13:21.930 17:02:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:21.930 17:02:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:21.930 17:02:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:21.930 17:02:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:21.930 17:02:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:21.930 17:02:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:21.930 17:02:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:21.930 17:02:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:21.930 17:02:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:21.930 { 00:13:21.930 "cntlid": 137, 00:13:21.930 "qid": 0, 00:13:21.930 "state": "enabled", 
00:13:21.930 "thread": "nvmf_tgt_poll_group_000", 00:13:21.930 "listen_address": { 00:13:21.930 "trtype": "TCP", 00:13:21.930 "adrfam": "IPv4", 00:13:21.930 "traddr": "10.0.0.2", 00:13:21.930 "trsvcid": "4420" 00:13:21.930 }, 00:13:21.930 "peer_address": { 00:13:21.930 "trtype": "TCP", 00:13:21.930 "adrfam": "IPv4", 00:13:21.930 "traddr": "10.0.0.1", 00:13:21.930 "trsvcid": "45602" 00:13:21.930 }, 00:13:21.930 "auth": { 00:13:21.930 "state": "completed", 00:13:21.930 "digest": "sha512", 00:13:21.930 "dhgroup": "ffdhe8192" 00:13:21.930 } 00:13:21.930 } 00:13:21.930 ]' 00:13:21.930 17:02:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:21.930 17:02:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:21.930 17:02:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:22.188 17:02:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:22.188 17:02:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:22.188 17:02:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:22.188 17:02:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:22.188 17:02:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:22.450 17:02:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --hostid 0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-secret DHHC-1:00:ZTA5ZmY5MjMyN2Y4ZGQ4NjZkMTE1NzZiNTgxOTYyMmFiMzBlMGU5MWNlOWFkODk1nIOcHw==: --dhchap-ctrl-secret DHHC-1:03:NTUzODYzZWQ1YTdlMjU3MmIyYjY5NzQ4ZGUwMDliMjgxZDNlY2RkMGMwYzJjYmQyYTk0ZWNhNDExNDIwMGM0YiILRi8=: 00:13:23.382 17:02:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:23.382 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:23.382 17:02:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da 00:13:23.382 17:02:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.382 17:02:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:23.382 17:02:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.382 17:02:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:23.382 17:02:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:23.382 17:02:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:23.382 17:02:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:13:23.382 17:02:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:23.382 17:02:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:23.382 17:02:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:23.382 
17:02:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:23.382 17:02:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:23.382 17:02:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:23.382 17:02:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.382 17:02:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:23.382 17:02:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.382 17:02:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:23.382 17:02:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:24.313 00:13:24.313 17:02:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:24.313 17:02:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:24.313 17:02:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:24.313 17:02:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:24.313 17:02:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:24.313 17:02:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:24.313 17:02:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:24.313 17:02:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:24.313 17:02:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:24.313 { 00:13:24.313 "cntlid": 139, 00:13:24.313 "qid": 0, 00:13:24.313 "state": "enabled", 00:13:24.313 "thread": "nvmf_tgt_poll_group_000", 00:13:24.313 "listen_address": { 00:13:24.313 "trtype": "TCP", 00:13:24.313 "adrfam": "IPv4", 00:13:24.313 "traddr": "10.0.0.2", 00:13:24.313 "trsvcid": "4420" 00:13:24.313 }, 00:13:24.313 "peer_address": { 00:13:24.313 "trtype": "TCP", 00:13:24.313 "adrfam": "IPv4", 00:13:24.313 "traddr": "10.0.0.1", 00:13:24.313 "trsvcid": "45626" 00:13:24.313 }, 00:13:24.313 "auth": { 00:13:24.313 "state": "completed", 00:13:24.313 "digest": "sha512", 00:13:24.313 "dhgroup": "ffdhe8192" 00:13:24.313 } 00:13:24.313 } 00:13:24.313 ]' 00:13:24.313 17:02:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:24.313 17:02:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:24.313 17:02:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:24.570 17:02:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:24.570 17:02:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r 
'.[0].auth.state' 00:13:24.570 17:02:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:24.570 17:02:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:24.570 17:02:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:24.828 17:02:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --hostid 0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-secret DHHC-1:01:MjY1ZmJhZGIwMzE3OGYyNDk3NGFiMzQzNDExNDFmZjf1T5Cu: --dhchap-ctrl-secret DHHC-1:02:OWY1ZWI5MjgwNGVkOTAxZTU1OWNmMzk0ZWMwZjBkZWY2ZDBhOGMyOTNkMWViYjE2Ax3/oA==: 00:13:25.392 17:02:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:25.392 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:25.392 17:02:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da 00:13:25.392 17:02:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.392 17:02:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:25.392 17:02:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.392 17:02:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:25.392 17:02:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:25.392 17:02:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:25.649 17:02:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:13:25.649 17:02:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:25.649 17:02:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:25.649 17:02:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:25.649 17:02:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:25.649 17:02:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:25.649 17:02:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:25.649 17:02:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.649 17:02:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:25.649 17:02:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.649 17:02:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:25.649 17:02:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:26.213 00:13:26.213 17:02:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:26.213 17:02:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:26.213 17:02:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:26.471 17:02:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:26.471 17:02:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:26.471 17:02:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:26.471 17:02:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:26.471 17:02:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:26.471 17:02:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:26.471 { 00:13:26.471 "cntlid": 141, 00:13:26.471 "qid": 0, 00:13:26.471 "state": "enabled", 00:13:26.471 "thread": "nvmf_tgt_poll_group_000", 00:13:26.471 "listen_address": { 00:13:26.471 "trtype": "TCP", 00:13:26.471 "adrfam": "IPv4", 00:13:26.471 "traddr": "10.0.0.2", 00:13:26.471 "trsvcid": "4420" 00:13:26.471 }, 00:13:26.471 "peer_address": { 00:13:26.471 "trtype": "TCP", 00:13:26.471 "adrfam": "IPv4", 00:13:26.471 "traddr": "10.0.0.1", 00:13:26.471 "trsvcid": "45650" 00:13:26.471 }, 00:13:26.471 "auth": { 00:13:26.471 "state": "completed", 00:13:26.471 "digest": "sha512", 00:13:26.471 "dhgroup": "ffdhe8192" 00:13:26.471 } 00:13:26.471 } 00:13:26.471 ]' 00:13:26.471 17:02:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:26.730 17:02:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:26.730 17:02:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:26.730 17:02:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:26.730 17:02:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:26.730 17:02:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:26.730 17:02:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:26.730 17:02:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:26.990 17:02:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --hostid 0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-secret DHHC-1:02:MDYxYTUyMzQxMWQ4NDY3ZjcxZmRiOTBjZmVjZDIwNjlmNzlkNDYzMzQ1ZWU3ZmRiS67TDw==: --dhchap-ctrl-secret DHHC-1:01:ODEyODljNTdjNTdkMGEyZThmZTRlN2MyYzkxOWM4MGI4Gx4s: 00:13:27.943 17:02:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:27.944 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:27.944 17:02:17 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da 00:13:27.944 17:02:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:27.944 17:02:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:27.944 17:02:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:27.944 17:02:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:27.944 17:02:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:27.944 17:02:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:27.944 17:02:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:13:27.944 17:02:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:27.944 17:02:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:27.944 17:02:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:27.944 17:02:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:27.944 17:02:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:27.944 17:02:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-key key3 00:13:27.944 17:02:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:27.944 17:02:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:27.944 17:02:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:27.944 17:02:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:27.944 17:02:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:28.527 00:13:28.527 17:02:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:28.527 17:02:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:28.527 17:02:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:28.807 17:02:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:28.807 17:02:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:28.807 17:02:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:28.807 17:02:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.065 17:02:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
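Every hostrpc entry in this trace (target/auth.sh@31) expands to the same rpc.py invocation against /var/tmp/host.sock, i.e. against the second SPDK application acting as the initiator rather than the nvmf target. A plausible shape of that helper, inferred from those expansions (the function body itself is never printed in the log, so treat this as an assumption):

    # Forward an RPC to the host-side (initiator) SPDK app on its own Unix socket;
    # rpc_cmd, by contrast, talks to the nvmf target over its default socket.
    hostrpc() {
        "$rootdir/scripts/rpc.py" -s /var/tmp/host.sock "$@"
    }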
00:13:29.065 17:02:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:29.065 { 00:13:29.065 "cntlid": 143, 00:13:29.065 "qid": 0, 00:13:29.065 "state": "enabled", 00:13:29.065 "thread": "nvmf_tgt_poll_group_000", 00:13:29.065 "listen_address": { 00:13:29.065 "trtype": "TCP", 00:13:29.065 "adrfam": "IPv4", 00:13:29.065 "traddr": "10.0.0.2", 00:13:29.065 "trsvcid": "4420" 00:13:29.065 }, 00:13:29.065 "peer_address": { 00:13:29.065 "trtype": "TCP", 00:13:29.065 "adrfam": "IPv4", 00:13:29.065 "traddr": "10.0.0.1", 00:13:29.065 "trsvcid": "45678" 00:13:29.065 }, 00:13:29.065 "auth": { 00:13:29.065 "state": "completed", 00:13:29.065 "digest": "sha512", 00:13:29.065 "dhgroup": "ffdhe8192" 00:13:29.065 } 00:13:29.065 } 00:13:29.065 ]' 00:13:29.065 17:02:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:29.065 17:02:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:29.065 17:02:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:29.065 17:02:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:29.065 17:02:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:29.065 17:02:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:29.065 17:02:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:29.065 17:02:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:29.322 17:02:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --hostid 0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-secret DHHC-1:03:YmM3NDVmZGZiZjExZTdhOWQwYjMyMzdiMzkyNDVhOGFjY2ViYjBmMDYyZWQ1NDBlZDI4YmMyNDhhZmE3NGMyNC2Swmc=: 00:13:30.253 17:02:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:30.253 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:30.253 17:02:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da 00:13:30.253 17:02:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:30.253 17:02:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:30.253 17:02:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:30.253 17:02:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:13:30.253 17:02:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:13:30.253 17:02:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:13:30.253 17:02:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:30.253 17:02:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:30.253 17:02:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests 
sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:30.253 17:02:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:13:30.253 17:02:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:30.253 17:02:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:30.253 17:02:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:30.253 17:02:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:30.253 17:02:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:30.253 17:02:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:30.253 17:02:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:30.253 17:02:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:30.253 17:02:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:30.253 17:02:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:30.253 17:02:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:30.817 00:13:31.075 17:02:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:31.075 17:02:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:31.075 17:02:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:31.075 17:02:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:31.075 17:02:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:31.075 17:02:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:31.075 17:02:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.334 17:02:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:31.334 17:02:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:31.334 { 00:13:31.334 "cntlid": 145, 00:13:31.335 "qid": 0, 00:13:31.335 "state": "enabled", 00:13:31.335 "thread": "nvmf_tgt_poll_group_000", 00:13:31.335 "listen_address": { 00:13:31.335 "trtype": "TCP", 00:13:31.335 "adrfam": "IPv4", 00:13:31.335 "traddr": "10.0.0.2", 00:13:31.335 "trsvcid": "4420" 00:13:31.335 }, 00:13:31.335 "peer_address": { 00:13:31.335 "trtype": "TCP", 00:13:31.335 "adrfam": "IPv4", 00:13:31.335 "traddr": "10.0.0.1", 00:13:31.335 "trsvcid": "49906" 00:13:31.335 }, 00:13:31.335 "auth": { 00:13:31.335 "state": "completed", 00:13:31.335 "digest": "sha512", 00:13:31.335 "dhgroup": "ffdhe8192" 00:13:31.335 } 00:13:31.335 } 
00:13:31.335 ]' 00:13:31.335 17:02:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:31.335 17:02:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:31.335 17:02:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:31.335 17:02:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:31.335 17:02:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:31.335 17:02:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:31.335 17:02:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:31.335 17:02:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:31.592 17:02:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --hostid 0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-secret DHHC-1:00:ZTA5ZmY5MjMyN2Y4ZGQ4NjZkMTE1NzZiNTgxOTYyMmFiMzBlMGU5MWNlOWFkODk1nIOcHw==: --dhchap-ctrl-secret DHHC-1:03:NTUzODYzZWQ1YTdlMjU3MmIyYjY5NzQ4ZGUwMDliMjgxZDNlY2RkMGMwYzJjYmQyYTk0ZWNhNDExNDIwMGM0YiILRi8=: 00:13:32.158 17:02:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:32.415 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:32.415 17:02:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da 00:13:32.415 17:02:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:32.416 17:02:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:32.416 17:02:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:32.416 17:02:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-key key1 00:13:32.416 17:02:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:32.416 17:02:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:32.416 17:02:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:32.416 17:02:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:13:32.416 17:02:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:13:32.416 17:02:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:13:32.416 17:02:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:13:32.416 17:02:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:32.416 17:02:22 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:13:32.416 17:02:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:32.416 17:02:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:13:32.416 17:02:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:13:33.008 request: 00:13:33.008 { 00:13:33.008 "name": "nvme0", 00:13:33.008 "trtype": "tcp", 00:13:33.008 "traddr": "10.0.0.2", 00:13:33.008 "adrfam": "ipv4", 00:13:33.008 "trsvcid": "4420", 00:13:33.008 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:33.008 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da", 00:13:33.008 "prchk_reftag": false, 00:13:33.008 "prchk_guard": false, 00:13:33.008 "hdgst": false, 00:13:33.008 "ddgst": false, 00:13:33.008 "dhchap_key": "key2", 00:13:33.008 "method": "bdev_nvme_attach_controller", 00:13:33.008 "req_id": 1 00:13:33.008 } 00:13:33.008 Got JSON-RPC error response 00:13:33.008 response: 00:13:33.008 { 00:13:33.008 "code": -5, 00:13:33.008 "message": "Input/output error" 00:13:33.008 } 00:13:33.008 17:02:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:13:33.008 17:02:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:33.008 17:02:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:33.008 17:02:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:33.008 17:02:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da 00:13:33.008 17:02:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:33.008 17:02:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.008 17:02:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:33.008 17:02:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:33.008 17:02:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:33.008 17:02:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.008 17:02:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:33.008 17:02:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:33.008 17:02:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:13:33.008 17:02:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:33.008 17:02:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:13:33.008 17:02:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:33.008 17:02:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:13:33.008 17:02:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:33.008 17:02:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:33.008 17:02:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:33.574 request: 00:13:33.574 { 00:13:33.574 "name": "nvme0", 00:13:33.574 "trtype": "tcp", 00:13:33.574 "traddr": "10.0.0.2", 00:13:33.574 "adrfam": "ipv4", 00:13:33.574 "trsvcid": "4420", 00:13:33.574 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:33.574 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da", 00:13:33.574 "prchk_reftag": false, 00:13:33.574 "prchk_guard": false, 00:13:33.574 "hdgst": false, 00:13:33.574 "ddgst": false, 00:13:33.574 "dhchap_key": "key1", 00:13:33.574 "dhchap_ctrlr_key": "ckey2", 00:13:33.574 "method": "bdev_nvme_attach_controller", 00:13:33.574 "req_id": 1 00:13:33.574 } 00:13:33.574 Got JSON-RPC error response 00:13:33.574 response: 00:13:33.574 { 00:13:33.574 "code": -5, 00:13:33.574 "message": "Input/output error" 00:13:33.574 } 00:13:33.574 17:02:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:13:33.574 17:02:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:33.574 17:02:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:33.574 17:02:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:33.574 17:02:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da 00:13:33.574 17:02:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:33.574 17:02:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.574 17:02:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:33.574 17:02:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-key key1 00:13:33.574 17:02:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:33.574 17:02:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.574 17:02:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:33.574 17:02:23 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:33.574 17:02:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:13:33.574 17:02:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:33.574 17:02:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:13:33.574 17:02:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:33.574 17:02:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:13:33.574 17:02:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:33.574 17:02:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:33.574 17:02:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:34.141 request: 00:13:34.141 { 00:13:34.141 "name": "nvme0", 00:13:34.141 "trtype": "tcp", 00:13:34.141 "traddr": "10.0.0.2", 00:13:34.141 "adrfam": "ipv4", 00:13:34.141 "trsvcid": "4420", 00:13:34.141 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:34.141 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da", 00:13:34.141 "prchk_reftag": false, 00:13:34.141 "prchk_guard": false, 00:13:34.141 "hdgst": false, 00:13:34.141 "ddgst": false, 00:13:34.141 "dhchap_key": "key1", 00:13:34.141 "dhchap_ctrlr_key": "ckey1", 00:13:34.141 "method": "bdev_nvme_attach_controller", 00:13:34.141 "req_id": 1 00:13:34.141 } 00:13:34.141 Got JSON-RPC error response 00:13:34.141 response: 00:13:34.141 { 00:13:34.141 "code": -5, 00:13:34.141 "message": "Input/output error" 00:13:34.141 } 00:13:34.141 17:02:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:13:34.141 17:02:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:34.141 17:02:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:34.141 17:02:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:34.141 17:02:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da 00:13:34.141 17:02:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:34.141 17:02:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.141 17:02:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:34.141 17:02:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # 
killprocess 69218 00:13:34.141 17:02:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 69218 ']' 00:13:34.141 17:02:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 69218 00:13:34.141 17:02:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:13:34.141 17:02:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:34.141 17:02:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 69218 00:13:34.141 killing process with pid 69218 00:13:34.141 17:02:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:34.141 17:02:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:34.141 17:02:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 69218' 00:13:34.141 17:02:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 69218 00:13:34.141 17:02:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 69218 00:13:34.399 17:02:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:13:34.399 17:02:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:34.399 17:02:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:34.399 17:02:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.399 17:02:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=72240 00:13:34.399 17:02:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:13:34.399 17:02:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 72240 00:13:34.399 17:02:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 72240 ']' 00:13:34.399 17:02:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:34.399 17:02:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:34.399 17:02:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:34.399 17:02:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:34.400 17:02:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.783 17:02:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:35.783 17:02:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:13:35.783 17:02:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:35.783 17:02:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:35.783 17:02:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.783 17:02:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:35.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
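Editor's note on the step traced above: the first nvmf target (pid 69218) is torn down and a second instance (pid 72240) is started with DH-HCHAP debug logging (-L nvmf_auth) and --wait-for-rpc, so it idles until driven over /var/tmp/spdk.sock. A minimal sketch of that restart outside the harness, assuming the same netns and repo layout as this run; polling rpc_get_methods is only a rough stand-in for the waitforlisten helper used here:

    # restart the target with nvmf_auth logging; --wait-for-rpc keeps it in the pre-init state
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
    nvmfpid=$!
    # wait until the RPC socket (/var/tmp/spdk.sock by default) answers
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done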
00:13:35.783 17:02:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:13:35.783 17:02:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 72240 00:13:35.783 17:02:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 72240 ']' 00:13:35.783 17:02:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:35.783 17:02:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:35.783 17:02:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:35.783 17:02:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:35.783 17:02:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.783 17:02:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:35.783 17:02:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:13:35.783 17:02:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:13:35.783 17:02:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:35.783 17:02:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.042 17:02:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:36.042 17:02:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:13:36.042 17:02:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:36.042 17:02:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:36.042 17:02:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:36.042 17:02:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:36.042 17:02:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:36.042 17:02:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-key key3 00:13:36.042 17:02:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:36.042 17:02:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.042 17:02:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:36.042 17:02:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:36.042 17:02:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:36.609 00:13:36.609 17:02:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:36.609 17:02:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:36.609 17:02:26 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@44 -- # jq -r '.[].name' 00:13:36.872 17:02:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:36.872 17:02:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:36.872 17:02:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:36.872 17:02:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.872 17:02:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:36.872 17:02:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:36.872 { 00:13:36.872 "cntlid": 1, 00:13:36.872 "qid": 0, 00:13:36.872 "state": "enabled", 00:13:36.872 "thread": "nvmf_tgt_poll_group_000", 00:13:36.872 "listen_address": { 00:13:36.872 "trtype": "TCP", 00:13:36.872 "adrfam": "IPv4", 00:13:36.872 "traddr": "10.0.0.2", 00:13:36.872 "trsvcid": "4420" 00:13:36.872 }, 00:13:36.872 "peer_address": { 00:13:36.872 "trtype": "TCP", 00:13:36.872 "adrfam": "IPv4", 00:13:36.872 "traddr": "10.0.0.1", 00:13:36.872 "trsvcid": "49960" 00:13:36.872 }, 00:13:36.872 "auth": { 00:13:36.872 "state": "completed", 00:13:36.872 "digest": "sha512", 00:13:36.872 "dhgroup": "ffdhe8192" 00:13:36.872 } 00:13:36.872 } 00:13:36.872 ]' 00:13:36.872 17:02:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:36.872 17:02:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:36.872 17:02:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:37.135 17:02:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:37.135 17:02:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:37.135 17:02:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:37.135 17:02:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:37.135 17:02:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:37.393 17:02:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --hostid 0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-secret DHHC-1:03:YmM3NDVmZGZiZjExZTdhOWQwYjMyMzdiMzkyNDVhOGFjY2ViYjBmMDYyZWQ1NDBlZDI4YmMyNDhhZmE3NGMyNC2Swmc=: 00:13:37.962 17:02:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:37.962 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:37.962 17:02:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da 00:13:37.962 17:02:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:37.962 17:02:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.962 17:02:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:37.962 17:02:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --dhchap-key key3 00:13:37.962 17:02:28 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:37.962 17:02:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.962 17:02:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:37.962 17:02:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:13:37.962 17:02:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:13:38.221 17:02:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:38.221 17:02:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:13:38.221 17:02:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:38.221 17:02:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:13:38.221 17:02:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:38.221 17:02:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:13:38.221 17:02:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:38.221 17:02:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:38.221 17:02:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:38.479 request: 00:13:38.479 { 00:13:38.479 "name": "nvme0", 00:13:38.479 "trtype": "tcp", 00:13:38.479 "traddr": "10.0.0.2", 00:13:38.479 "adrfam": "ipv4", 00:13:38.479 "trsvcid": "4420", 00:13:38.479 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:38.479 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da", 00:13:38.479 "prchk_reftag": false, 00:13:38.479 "prchk_guard": false, 00:13:38.479 "hdgst": false, 00:13:38.479 "ddgst": false, 00:13:38.479 "dhchap_key": "key3", 00:13:38.479 "method": "bdev_nvme_attach_controller", 00:13:38.479 "req_id": 1 00:13:38.479 } 00:13:38.479 Got JSON-RPC error response 00:13:38.479 response: 00:13:38.479 { 00:13:38.479 "code": -5, 00:13:38.479 "message": "Input/output error" 00:13:38.479 } 00:13:38.479 17:02:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:13:38.479 17:02:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:38.479 17:02:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:38.479 17:02:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:38.479 17:02:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 
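Editor's note on the stretch just traced: it is a negative test. With key3 already authenticated using sha512/ffdhe8192, the host-side bdev_nvme_set_options call narrows the initiator to sha256 only, and the following bdev_nvme_attach_controller with key3 is expected to fail (the NOT wrapper asserts a non-zero exit), which it does with JSON-RPC error -5, Input/output error — apparently because the digest offer no longer matches what the target will negotiate. A condensed sketch of the same check against the host RPC socket, reusing the NQNs from this run and leaving out the NOT/valid_exec_arg plumbing:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    HOSTSOCK=/var/tmp/host.sock
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da
    # narrow the initiator's allowed digests, then expect the attach to be rejected
    "$RPC" -s "$HOSTSOCK" bdev_nvme_set_options --dhchap-digests sha256
    if "$RPC" -s "$HOSTSOCK" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
          -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3; then
        echo "unexpected: attach succeeded with mismatched digest" >&2
    fi
    # the trace that follows repeats the pattern with --dhchap-dhgroups ffdhe2048
    # before restoring the full digest/dhgroup lists (auth.sh @163 and @175)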
00:13:38.479 17:02:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:13:38.480 17:02:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:13:38.480 17:02:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:13:38.738 17:02:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:38.738 17:02:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:13:38.738 17:02:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:38.738 17:02:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:13:38.738 17:02:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:38.738 17:02:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:13:38.738 17:02:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:38.738 17:02:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:38.738 17:02:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:39.007 request: 00:13:39.008 { 00:13:39.008 "name": "nvme0", 00:13:39.008 "trtype": "tcp", 00:13:39.008 "traddr": "10.0.0.2", 00:13:39.008 "adrfam": "ipv4", 00:13:39.008 "trsvcid": "4420", 00:13:39.008 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:39.008 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da", 00:13:39.008 "prchk_reftag": false, 00:13:39.008 "prchk_guard": false, 00:13:39.008 "hdgst": false, 00:13:39.008 "ddgst": false, 00:13:39.008 "dhchap_key": "key3", 00:13:39.008 "method": "bdev_nvme_attach_controller", 00:13:39.008 "req_id": 1 00:13:39.008 } 00:13:39.008 Got JSON-RPC error response 00:13:39.008 response: 00:13:39.008 { 00:13:39.008 "code": -5, 00:13:39.008 "message": "Input/output error" 00:13:39.008 } 00:13:39.008 17:02:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:13:39.008 17:02:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:39.008 17:02:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:39.008 17:02:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:39.008 17:02:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:13:39.008 17:02:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf 
%s sha256,sha384,sha512 00:13:39.008 17:02:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:13:39.008 17:02:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:39.008 17:02:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:39.008 17:02:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:39.267 17:02:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da 00:13:39.267 17:02:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:39.267 17:02:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.267 17:02:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:39.267 17:02:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da 00:13:39.267 17:02:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:39.267 17:02:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.267 17:02:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:39.267 17:02:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:39.267 17:02:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:13:39.267 17:02:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:39.267 17:02:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:13:39.267 17:02:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:39.267 17:02:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:13:39.267 17:02:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:39.267 17:02:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:39.267 17:02:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 
00:13:39.525 request: 00:13:39.525 { 00:13:39.525 "name": "nvme0", 00:13:39.525 "trtype": "tcp", 00:13:39.525 "traddr": "10.0.0.2", 00:13:39.525 "adrfam": "ipv4", 00:13:39.525 "trsvcid": "4420", 00:13:39.525 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:39.525 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da", 00:13:39.525 "prchk_reftag": false, 00:13:39.525 "prchk_guard": false, 00:13:39.525 "hdgst": false, 00:13:39.525 "ddgst": false, 00:13:39.525 "dhchap_key": "key0", 00:13:39.525 "dhchap_ctrlr_key": "key1", 00:13:39.525 "method": "bdev_nvme_attach_controller", 00:13:39.525 "req_id": 1 00:13:39.525 } 00:13:39.525 Got JSON-RPC error response 00:13:39.525 response: 00:13:39.525 { 00:13:39.525 "code": -5, 00:13:39.525 "message": "Input/output error" 00:13:39.525 } 00:13:39.525 17:02:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:13:39.525 17:02:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:39.525 17:02:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:39.525 17:02:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:39.525 17:02:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:13:39.525 17:02:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:13:39.784 00:13:39.784 17:02:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:13:39.784 17:02:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:13:39.784 17:02:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:40.041 17:02:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:40.041 17:02:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:40.041 17:02:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:40.607 17:02:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:13:40.607 17:02:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:13:40.607 17:02:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 69250 00:13:40.607 17:02:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 69250 ']' 00:13:40.607 17:02:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 69250 00:13:40.607 17:02:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:13:40.607 17:02:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:40.607 17:02:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 69250 00:13:40.607 killing process with pid 69250 00:13:40.607 17:02:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:40.607 17:02:30 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:40.607 17:02:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 69250' 00:13:40.607 17:02:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 69250 00:13:40.607 17:02:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 69250 00:13:40.866 17:02:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:13:40.866 17:02:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:40.866 17:02:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:13:40.866 17:02:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:40.866 17:02:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:13:40.866 17:02:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:40.866 17:02:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:40.866 rmmod nvme_tcp 00:13:40.866 rmmod nvme_fabrics 00:13:40.866 rmmod nvme_keyring 00:13:40.866 17:02:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:40.866 17:02:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:13:40.866 17:02:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:13:40.866 17:02:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 72240 ']' 00:13:40.866 17:02:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 72240 00:13:40.866 17:02:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 72240 ']' 00:13:40.866 17:02:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 72240 00:13:40.866 17:02:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:13:40.866 17:02:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:40.866 17:02:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72240 00:13:40.866 killing process with pid 72240 00:13:40.866 17:02:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:40.866 17:02:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:40.866 17:02:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72240' 00:13:40.866 17:02:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 72240 00:13:40.866 17:02:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 72240 00:13:41.139 17:02:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:41.139 17:02:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:41.139 17:02:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:41.139 17:02:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:41.139 17:02:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:41.139 17:02:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:41.139 17:02:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:41.139 17:02:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:41.139 17:02:31 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:41.139 17:02:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.BUO /tmp/spdk.key-sha256.5bY /tmp/spdk.key-sha384.sOE /tmp/spdk.key-sha512.yj4 /tmp/spdk.key-sha512.nvz /tmp/spdk.key-sha384.omZ /tmp/spdk.key-sha256.jND '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:13:41.139 ************************************ 00:13:41.139 END TEST nvmf_auth_target 00:13:41.139 ************************************ 00:13:41.139 00:13:41.139 real 2m48.007s 00:13:41.139 user 6m42.288s 00:13:41.139 sys 0m26.072s 00:13:41.139 17:02:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:41.139 17:02:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.418 17:02:31 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:41.418 17:02:31 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:13:41.418 17:02:31 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:13:41.418 17:02:31 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:13:41.418 17:02:31 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:41.418 17:02:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:41.418 ************************************ 00:13:41.418 START TEST nvmf_bdevio_no_huge 00:13:41.418 ************************************ 00:13:41.418 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:13:41.418 * Looking for test storage... 00:13:41.418 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:41.418 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:41.418 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:13:41.418 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:41.418 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:41.418 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:41.418 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:41.418 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:41.418 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:41.418 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:41.418 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:41.418 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:41.418 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:41.418 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da 00:13:41.418 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=0b4e8503-7bac-4879-926a-209303c4b3da 00:13:41.418 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:41.418 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge 
-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:41.418 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:41.418 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:41.418 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:41.418 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:41.418 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:41.418 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:41.418 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.418 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.418 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.418 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:13:41.418 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.418 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:13:41.418 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:41.418 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:41.418 
17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:41.418 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:41.418 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:41.418 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:41.418 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:41.418 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:41.418 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:41.418 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:41.418 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:13:41.418 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:41.418 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:41.418 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:41.418 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:41.418 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:41.418 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:41.418 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:41.418 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:41.418 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:41.418 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:41.418 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:41.418 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:41.418 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:41.418 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:41.418 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:41.418 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:41.418 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:41.418 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:41.418 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:41.418 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:41.418 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:41.418 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:41.418 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:41.418 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:41.418 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:41.418 17:02:31 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:41.419 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:41.419 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:41.419 Cannot find device "nvmf_tgt_br" 00:13:41.419 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # true 00:13:41.419 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:41.419 Cannot find device "nvmf_tgt_br2" 00:13:41.419 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # true 00:13:41.419 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:41.419 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:41.419 Cannot find device "nvmf_tgt_br" 00:13:41.419 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # true 00:13:41.419 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:41.419 Cannot find device "nvmf_tgt_br2" 00:13:41.419 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # true 00:13:41.419 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:41.419 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:41.419 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:41.419 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:41.419 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:13:41.419 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:41.419 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:41.419 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:13:41.419 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:41.419 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:41.678 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:41.678 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:41.678 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:41.678 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:41.678 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:41.679 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:41.679 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:41.679 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:41.679 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:41.679 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:41.679 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:41.679 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:41.679 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:41.679 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:41.679 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:41.679 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:41.679 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:41.679 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:41.679 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:41.679 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:41.679 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:41.679 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:41.679 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:41.679 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:13:41.679 00:13:41.679 --- 10.0.0.2 ping statistics --- 00:13:41.679 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:41.679 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:13:41.679 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:41.679 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:41.679 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:13:41.679 00:13:41.679 --- 10.0.0.3 ping statistics --- 00:13:41.679 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:41.679 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:13:41.679 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:41.679 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:41.679 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:13:41.679 00:13:41.679 --- 10.0.0.1 ping statistics --- 00:13:41.679 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:41.679 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:13:41.679 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:41.679 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@433 -- # return 0 00:13:41.679 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:41.679 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:41.679 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:41.679 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:41.679 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:41.679 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:41.679 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:41.679 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:13:41.679 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:41.679 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:41.679 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:41.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:41.679 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=72549 00:13:41.679 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 72549 00:13:41.679 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 72549 ']' 00:13:41.679 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:13:41.679 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:41.679 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:41.679 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:41.679 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:41.679 17:02:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:41.937 [2024-07-15 17:02:31.999722] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:13:41.937 [2024-07-15 17:02:31.999849] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:13:41.937 [2024-07-15 17:02:32.160734] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:42.195 [2024-07-15 17:02:32.282607] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:13:42.195 [2024-07-15 17:02:32.283054] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:42.195 [2024-07-15 17:02:32.283536] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:42.195 [2024-07-15 17:02:32.283979] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:42.195 [2024-07-15 17:02:32.284186] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:42.195 [2024-07-15 17:02:32.284539] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:13:42.195 [2024-07-15 17:02:32.284682] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:13:42.195 [2024-07-15 17:02:32.284813] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:13:42.195 [2024-07-15 17:02:32.284815] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:42.195 [2024-07-15 17:02:32.289517] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:42.764 17:02:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:42.764 17:02:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:13:42.764 17:02:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:42.764 17:02:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:42.764 17:02:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:42.764 17:02:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:42.764 17:02:32 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:42.764 17:02:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:42.764 17:02:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:42.764 [2024-07-15 17:02:32.992897] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:42.764 17:02:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:42.764 17:02:33 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:42.764 17:02:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:42.764 17:02:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:42.764 Malloc0 00:13:42.764 17:02:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:42.764 17:02:33 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:42.764 17:02:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:42.764 17:02:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:42.764 17:02:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:42.764 17:02:33 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:42.764 17:02:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:42.764 17:02:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set 
+x 00:13:42.764 17:02:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:42.764 17:02:33 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:42.764 17:02:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:42.764 17:02:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:42.764 [2024-07-15 17:02:33.037085] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:42.764 17:02:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:42.764 17:02:33 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:13:42.764 17:02:33 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:13:42.764 17:02:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:13:42.764 17:02:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:13:42.764 17:02:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:42.764 17:02:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:42.764 { 00:13:42.764 "params": { 00:13:42.764 "name": "Nvme$subsystem", 00:13:42.764 "trtype": "$TEST_TRANSPORT", 00:13:42.764 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:42.764 "adrfam": "ipv4", 00:13:42.764 "trsvcid": "$NVMF_PORT", 00:13:42.764 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:42.764 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:42.764 "hdgst": ${hdgst:-false}, 00:13:42.764 "ddgst": ${ddgst:-false} 00:13:42.764 }, 00:13:42.764 "method": "bdev_nvme_attach_controller" 00:13:42.764 } 00:13:42.764 EOF 00:13:42.764 )") 00:13:42.764 17:02:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:13:42.764 17:02:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:13:42.764 17:02:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:13:42.764 17:02:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:42.764 "params": { 00:13:42.764 "name": "Nvme1", 00:13:42.764 "trtype": "tcp", 00:13:42.764 "traddr": "10.0.0.2", 00:13:42.764 "adrfam": "ipv4", 00:13:42.764 "trsvcid": "4420", 00:13:42.764 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:42.764 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:42.764 "hdgst": false, 00:13:42.764 "ddgst": false 00:13:42.764 }, 00:13:42.764 "method": "bdev_nvme_attach_controller" 00:13:42.764 }' 00:13:43.023 [2024-07-15 17:02:33.094513] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
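With the target up, bdevio.sh assembles the test subsystem purely over RPC and then points the bdevio tool at it. A condensed sketch of the sequence traced above (rpc.py here means /home/vagrant/spdk_repo/spdk/scripts/rpc.py, as used elsewhere in the trace); the JSON that bdevio reads from /dev/fd/62 is the bdev_nvme_attach_controller parameter block printed just above, fed in by the harness through process substitution:

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # bdevio itself also runs without hugepages and connects as host1 via the generated JSON:
  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024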
00:13:43.023 [2024-07-15 17:02:33.094615] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid72585 ] 00:13:43.023 [2024-07-15 17:02:33.241628] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:43.293 [2024-07-15 17:02:33.385752] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:43.293 [2024-07-15 17:02:33.385901] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:43.293 [2024-07-15 17:02:33.385907] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:43.293 [2024-07-15 17:02:33.400718] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:43.293 I/O targets: 00:13:43.293 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:13:43.293 00:13:43.293 00:13:43.293 CUnit - A unit testing framework for C - Version 2.1-3 00:13:43.293 http://cunit.sourceforge.net/ 00:13:43.293 00:13:43.293 00:13:43.293 Suite: bdevio tests on: Nvme1n1 00:13:43.293 Test: blockdev write read block ...passed 00:13:43.293 Test: blockdev write zeroes read block ...passed 00:13:43.293 Test: blockdev write zeroes read no split ...passed 00:13:43.293 Test: blockdev write zeroes read split ...passed 00:13:43.556 Test: blockdev write zeroes read split partial ...passed 00:13:43.556 Test: blockdev reset ...[2024-07-15 17:02:33.596315] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:13:43.556 [2024-07-15 17:02:33.596433] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e1870 (9): Bad file descriptor 00:13:43.556 passed 00:13:43.556 Test: blockdev write read 8 blocks ...[2024-07-15 17:02:33.614991] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:13:43.556 passed 00:13:43.556 Test: blockdev write read size > 128k ...passed 00:13:43.556 Test: blockdev write read invalid size ...passed 00:13:43.556 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:43.556 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:43.556 Test: blockdev write read max offset ...passed 00:13:43.556 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:43.556 Test: blockdev writev readv 8 blocks ...passed 00:13:43.556 Test: blockdev writev readv 30 x 1block ...passed 00:13:43.556 Test: blockdev writev readv block ...passed 00:13:43.556 Test: blockdev writev readv size > 128k ...passed 00:13:43.556 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:43.556 Test: blockdev comparev and writev ...[2024-07-15 17:02:33.623551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:43.556 [2024-07-15 17:02:33.623600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:13:43.556 [2024-07-15 17:02:33.623626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:43.556 [2024-07-15 17:02:33.623646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:13:43.556 [2024-07-15 17:02:33.623951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:43.556 [2024-07-15 17:02:33.623972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:13:43.556 [2024-07-15 17:02:33.623993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:43.556 [2024-07-15 17:02:33.624005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:13:43.556 [2024-07-15 17:02:33.624305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:43.556 [2024-07-15 17:02:33.624325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:13:43.556 [2024-07-15 17:02:33.624345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:43.556 [2024-07-15 17:02:33.624373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:13:43.556 [2024-07-15 17:02:33.624671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:43.556 [2024-07-15 17:02:33.624703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:13:43.556 [2024-07-15 17:02:33.624725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:43.556 [2024-07-15 17:02:33.624745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 
00:13:43.556 passed 00:13:43.556 Test: blockdev nvme passthru rw ...passed 00:13:43.556 Test: blockdev nvme passthru vendor specific ...[2024-07-15 17:02:33.625549] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:43.556 [2024-07-15 17:02:33.625581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:13:43.556 [2024-07-15 17:02:33.625711] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:43.556 [2024-07-15 17:02:33.625730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:13:43.556 passed 00:13:43.556 Test: blockdev nvme admin passthru ...[2024-07-15 17:02:33.625833] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:43.556 [2024-07-15 17:02:33.625859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:13:43.556 [2024-07-15 17:02:33.625977] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:43.556 [2024-07-15 17:02:33.625996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:13:43.556 passed 00:13:43.556 Test: blockdev copy ...passed 00:13:43.556 00:13:43.556 Run Summary: Type Total Ran Passed Failed Inactive 00:13:43.556 suites 1 1 n/a 0 0 00:13:43.556 tests 23 23 23 0 0 00:13:43.556 asserts 152 152 152 0 n/a 00:13:43.556 00:13:43.556 Elapsed time = 0.159 seconds 00:13:43.815 17:02:33 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:43.815 17:02:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:43.815 17:02:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:43.815 17:02:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:43.815 17:02:33 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:13:43.815 17:02:33 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:13:43.815 17:02:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:43.815 17:02:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:13:43.815 17:02:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:43.815 17:02:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:13:43.815 17:02:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:43.815 17:02:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:43.815 rmmod nvme_tcp 00:13:43.815 rmmod nvme_fabrics 00:13:43.815 rmmod nvme_keyring 00:13:43.815 17:02:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:43.815 17:02:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:13:43.815 17:02:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:13:43.815 17:02:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 72549 ']' 00:13:43.815 17:02:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 72549 00:13:43.815 
17:02:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 72549 ']' 00:13:43.815 17:02:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 72549 00:13:43.815 17:02:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:13:43.815 17:02:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:43.815 17:02:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72549 00:13:43.815 17:02:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:13:43.815 killing process with pid 72549 00:13:43.815 17:02:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:13:43.815 17:02:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72549' 00:13:43.815 17:02:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 72549 00:13:43.815 17:02:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 72549 00:13:44.383 17:02:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:44.383 17:02:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:44.383 17:02:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:44.383 17:02:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:44.383 17:02:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:44.383 17:02:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:44.383 17:02:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:44.383 17:02:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:44.383 17:02:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:44.383 00:13:44.383 real 0m3.069s 00:13:44.383 user 0m9.948s 00:13:44.383 sys 0m1.186s 00:13:44.383 17:02:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:44.383 17:02:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:44.383 ************************************ 00:13:44.383 END TEST nvmf_bdevio_no_huge 00:13:44.383 ************************************ 00:13:44.383 17:02:34 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:44.383 17:02:34 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:13:44.383 17:02:34 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:44.383 17:02:34 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:44.383 17:02:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:44.383 ************************************ 00:13:44.383 START TEST nvmf_tls 00:13:44.383 ************************************ 00:13:44.383 17:02:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:13:44.383 * Looking for test storage... 
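The teardown that closes the bdevio stage is fully traced above: nvmftestfini unloads the host-side NVMe/TCP modules, killprocess stops the target after confirming it is still the process it started, and the namespace plumbing is flushed. A condensed sketch using only commands from the trace (pid 72549 as logged; _remove_spdk_ns is not expanded in the trace, so only the visible address flush is shown):

  modprobe -v -r nvme-tcp        # the rmmod output shows nvme_tcp, nvme_fabrics, nvme_keyring going away
  modprobe -v -r nvme-fabrics
  kill -0 72549                                 # still alive?
  ps --no-headers -o comm= 72549                # -> reactor_3, i.e. not running under sudo
  kill 72549 && wait 72549
  ip -4 addr flush nvmf_init_if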
00:13:44.383 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:44.383 17:02:34 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:44.383 17:02:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:13:44.383 17:02:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:44.383 17:02:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:44.383 17:02:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:44.383 17:02:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:44.383 17:02:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:44.383 17:02:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:44.383 17:02:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:44.383 17:02:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:44.383 17:02:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:44.383 17:02:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:44.383 17:02:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da 00:13:44.383 17:02:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=0b4e8503-7bac-4879-926a-209303c4b3da 00:13:44.383 17:02:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:44.383 17:02:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:44.383 17:02:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:44.383 17:02:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:44.383 17:02:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:44.383 17:02:34 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:44.383 17:02:34 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:44.383 17:02:34 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:44.383 17:02:34 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.383 17:02:34 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.642 17:02:34 nvmf_tcp.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.642 17:02:34 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:13:44.643 17:02:34 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.643 17:02:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:13:44.643 17:02:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:44.643 17:02:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:44.643 17:02:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:44.643 17:02:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:44.643 17:02:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:44.643 17:02:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:44.643 17:02:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:44.643 17:02:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:44.643 17:02:34 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:44.643 17:02:34 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:13:44.643 17:02:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:44.643 17:02:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:44.643 17:02:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:44.643 17:02:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:44.643 17:02:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:44.643 17:02:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:44.643 17:02:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:44.643 17:02:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:44.643 17:02:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:44.643 17:02:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:44.643 17:02:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:44.643 17:02:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:44.643 17:02:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:44.643 17:02:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:44.643 17:02:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@141 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:13:44.643 17:02:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:44.643 17:02:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:44.643 17:02:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:44.643 17:02:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:44.643 17:02:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:44.643 17:02:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:44.643 17:02:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:44.643 17:02:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:44.643 17:02:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:44.643 17:02:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:44.643 17:02:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:44.643 17:02:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:44.643 17:02:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:44.643 Cannot find device "nvmf_tgt_br" 00:13:44.643 17:02:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@155 -- # true 00:13:44.643 17:02:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:44.643 Cannot find device "nvmf_tgt_br2" 00:13:44.643 17:02:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@156 -- # true 00:13:44.643 17:02:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:44.643 17:02:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:44.643 Cannot find device "nvmf_tgt_br" 00:13:44.643 17:02:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@158 -- # true 00:13:44.643 17:02:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:44.643 Cannot find device "nvmf_tgt_br2" 00:13:44.643 17:02:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@159 -- # true 00:13:44.643 17:02:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:44.643 17:02:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:44.643 17:02:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:44.643 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:44.643 17:02:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@162 -- # true 00:13:44.643 17:02:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:44.643 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:44.643 17:02:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@163 -- # true 00:13:44.643 17:02:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:44.643 17:02:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:44.643 17:02:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:44.643 17:02:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:44.643 17:02:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns 
nvmf_tgt_ns_spdk 00:13:44.643 17:02:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:44.643 17:02:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:44.643 17:02:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:44.643 17:02:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:44.643 17:02:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:44.643 17:02:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:44.643 17:02:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:44.643 17:02:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:44.643 17:02:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:44.643 17:02:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:44.903 17:02:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:44.903 17:02:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:44.903 17:02:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:44.903 17:02:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:44.903 17:02:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:44.903 17:02:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:44.903 17:02:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:44.903 17:02:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:44.903 17:02:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:44.903 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:44.903 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.085 ms 00:13:44.903 00:13:44.903 --- 10.0.0.2 ping statistics --- 00:13:44.903 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:44.903 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:13:44.903 17:02:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:44.903 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:44.903 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:13:44.903 00:13:44.903 --- 10.0.0.3 ping statistics --- 00:13:44.903 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:44.903 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:13:44.903 17:02:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:44.903 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:44.903 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:13:44.903 00:13:44.903 --- 10.0.0.1 ping statistics --- 00:13:44.903 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:44.903 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:13:44.903 17:02:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:44.903 17:02:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@433 -- # return 0 00:13:44.903 17:02:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:44.903 17:02:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:44.903 17:02:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:44.903 17:02:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:44.903 17:02:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:44.903 17:02:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:44.903 17:02:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:44.903 17:02:35 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:13:44.903 17:02:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:44.903 17:02:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:44.903 17:02:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:44.903 17:02:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=72770 00:13:44.903 17:02:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:13:44.903 17:02:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 72770 00:13:44.903 17:02:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 72770 ']' 00:13:44.903 17:02:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:44.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:44.903 17:02:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:44.903 17:02:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:44.903 17:02:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:44.903 17:02:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:44.903 [2024-07-15 17:02:35.086715] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:13:44.904 [2024-07-15 17:02:35.087057] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:45.163 [2024-07-15 17:02:35.223917] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:45.163 [2024-07-15 17:02:35.341534] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:45.163 [2024-07-15 17:02:35.341586] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
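For the TLS stage the same namespace plumbing is rebuilt, but the target is started on a single core and with --wait-for-rpc so that tls.sh can switch the socket layer to the ssl implementation over RPC before the subsystems initialize. Sketch, with the launch line as traced and the wait loop again a hypothetical stand-in for waitforlisten:

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc &
  while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done
  # The sock/ssl RPCs traced below must land before framework_start_init is issued.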
00:13:45.163 [2024-07-15 17:02:35.341599] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:45.163 [2024-07-15 17:02:35.341607] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:45.163 [2024-07-15 17:02:35.341615] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:45.163 [2024-07-15 17:02:35.341646] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:46.099 17:02:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:46.099 17:02:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:46.099 17:02:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:46.099 17:02:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:46.099 17:02:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:46.099 17:02:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:46.099 17:02:36 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:13:46.099 17:02:36 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:13:46.099 true 00:13:46.099 17:02:36 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:46.099 17:02:36 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:13:46.357 17:02:36 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:13:46.357 17:02:36 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:13:46.358 17:02:36 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:13:46.618 17:02:36 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:13:46.618 17:02:36 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:46.877 17:02:37 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:13:46.877 17:02:37 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:13:46.877 17:02:37 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:13:47.135 17:02:37 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:47.135 17:02:37 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:13:47.393 17:02:37 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:13:47.393 17:02:37 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:13:47.393 17:02:37 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:47.393 17:02:37 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:13:47.651 17:02:37 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:13:47.651 17:02:37 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:13:47.651 17:02:37 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:13:47.911 17:02:38 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:47.911 17:02:38 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 
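This stretch of trace (continuing just below) is tls.sh exercising the ssl socket implementation's knobs: it makes ssl the default, then sets and reads back the TLS version and the kTLS flag, failing the test if any readback differs. Condensed into the underlying RPC calls (rpc.py = scripts/rpc.py):

  rpc.py sock_set_default_impl -i ssl
  rpc.py sock_impl_set_options -i ssl --tls-version 13
  rpc.py sock_impl_get_options -i ssl | jq -r .tls_version   # expect 13
  rpc.py sock_impl_set_options -i ssl --tls-version 7
  rpc.py sock_impl_get_options -i ssl | jq -r .tls_version   # expect 7
  rpc.py sock_impl_set_options -i ssl --enable-ktls
  rpc.py sock_impl_get_options -i ssl | jq -r .enable_ktls   # expect true
  rpc.py sock_impl_set_options -i ssl --disable-ktls
  rpc.py sock_impl_get_options -i ssl | jq -r .enable_ktls   # expect false

Only after these options are pinned does the script call framework_start_init (seen a few lines further down), which is why the target was started with --wait-for-rpc.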
00:13:48.478 17:02:38 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:13:48.478 17:02:38 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:13:48.478 17:02:38 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:13:48.478 17:02:38 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:48.478 17:02:38 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:13:48.737 17:02:38 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:13:48.737 17:02:38 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:13:48.737 17:02:38 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:13:48.737 17:02:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:13:48.737 17:02:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:13:48.737 17:02:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:13:48.737 17:02:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:13:48.737 17:02:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:13:48.737 17:02:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:13:48.737 17:02:39 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:13:48.737 17:02:39 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:13:48.737 17:02:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:13:48.737 17:02:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:13:48.737 17:02:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:13:48.737 17:02:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:13:48.737 17:02:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:13:48.737 17:02:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:13:48.996 17:02:39 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:13:48.996 17:02:39 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:13:48.996 17:02:39 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.4gILJCvSLX 00:13:48.996 17:02:39 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:13:48.996 17:02:39 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.FpPoQhgEos 00:13:48.996 17:02:39 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:13:48.996 17:02:39 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:13:48.996 17:02:39 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.4gILJCvSLX 00:13:48.996 17:02:39 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.FpPoQhgEos 00:13:48.996 17:02:39 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:13:49.254 17:02:39 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:13:49.513 [2024-07-15 17:02:39.630572] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket 
implementaion override: uring 00:13:49.513 17:02:39 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.4gILJCvSLX 00:13:49.513 17:02:39 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.4gILJCvSLX 00:13:49.513 17:02:39 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:49.772 [2024-07-15 17:02:39.896778] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:49.772 17:02:39 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:50.044 17:02:40 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:13:50.302 [2024-07-15 17:02:40.364883] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:50.302 [2024-07-15 17:02:40.365094] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:50.302 17:02:40 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:50.560 malloc0 00:13:50.560 17:02:40 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:50.560 17:02:40 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.4gILJCvSLX 00:13:50.818 [2024-07-15 17:02:41.061183] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:13:50.818 17:02:41 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.4gILJCvSLX 00:14:03.026 Initializing NVMe Controllers 00:14:03.026 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:03.026 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:03.026 Initialization complete. Launching workers. 
00:14:03.026 ======================================================== 00:14:03.026 Latency(us) 00:14:03.026 Device Information : IOPS MiB/s Average min max 00:14:03.026 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9553.45 37.32 6700.69 1579.50 7875.83 00:14:03.026 ======================================================== 00:14:03.026 Total : 9553.45 37.32 6700.69 1579.50 7875.83 00:14:03.026 00:14:03.026 17:02:51 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.4gILJCvSLX 00:14:03.026 17:02:51 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:03.026 17:02:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:03.026 17:02:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:03.026 17:02:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.4gILJCvSLX' 00:14:03.026 17:02:51 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:03.026 17:02:51 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=73002 00:14:03.026 17:02:51 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:03.026 17:02:51 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:03.026 17:02:51 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 73002 /var/tmp/bdevperf.sock 00:14:03.026 17:02:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73002 ']' 00:14:03.026 17:02:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:03.026 17:02:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:03.027 17:02:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:03.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:03.027 17:02:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:03.027 17:02:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:03.027 [2024-07-15 17:02:51.326082] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
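The throughput table above comes from spdk_nvme_perf connecting over TLS with the PSK set up earlier in the trace, and the bdevperf run starting next reuses the same key. A condensed sketch of the PSK plumbing on both sides, using only commands visible in the trace ($key stands for the NVMeTLSkey-1:01:... interchange string derived by format_interchange_psk; the redirect into the mktemp'd path is implied by the trace rather than shown):

  # Target side: write the PSK to a 0600 file, listen with -k (TLS), and bind the key to host1.
  echo -n "$key" > /tmp/tmp.4gILJCvSLX && chmod 0600 /tmp/tmp.4gILJCvSLX
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.4gILJCvSLX

  # Initiator side: bdevperf attaches with the matching key, then bdevperf.py drives the I/O.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4096 -w verify -t 10 &
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.4gILJCvSLX
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests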
00:14:03.027 [2024-07-15 17:02:51.326468] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73002 ] 00:14:03.027 [2024-07-15 17:02:51.461145] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:03.027 [2024-07-15 17:02:51.610376] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:03.027 [2024-07-15 17:02:51.666018] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:03.027 17:02:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:03.027 17:02:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:03.027 17:02:52 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.4gILJCvSLX 00:14:03.027 [2024-07-15 17:02:52.566309] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:03.027 [2024-07-15 17:02:52.566488] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:03.027 TLSTESTn1 00:14:03.027 17:02:52 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:03.027 Running I/O for 10 seconds... 00:14:13.008 00:14:13.008 Latency(us) 00:14:13.008 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:13.008 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:13.008 Verification LBA range: start 0x0 length 0x2000 00:14:13.008 TLSTESTn1 : 10.03 3967.08 15.50 0.00 0.00 32196.86 7119.59 19660.80 00:14:13.008 =================================================================================================================== 00:14:13.008 Total : 3967.08 15.50 0.00 0.00 32196.86 7119.59 19660.80 00:14:13.008 0 00:14:13.008 17:03:02 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:13.008 17:03:02 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 73002 00:14:13.008 17:03:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73002 ']' 00:14:13.008 17:03:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73002 00:14:13.008 17:03:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:13.008 17:03:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:13.008 17:03:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73002 00:14:13.008 killing process with pid 73002 00:14:13.008 Received shutdown signal, test time was about 10.000000 seconds 00:14:13.008 00:14:13.008 Latency(us) 00:14:13.008 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:13.008 =================================================================================================================== 00:14:13.008 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:13.008 17:03:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:14:13.008 17:03:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' 
reactor_2 = sudo ']' 00:14:13.008 17:03:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73002' 00:14:13.008 17:03:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73002 00:14:13.008 [2024-07-15 17:03:02.828330] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:13.008 17:03:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73002 00:14:13.008 17:03:03 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.FpPoQhgEos 00:14:13.008 17:03:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:14:13.008 17:03:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.FpPoQhgEos 00:14:13.008 17:03:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:14:13.008 17:03:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:13.008 17:03:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:14:13.008 17:03:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:13.008 17:03:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.FpPoQhgEos 00:14:13.008 17:03:03 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:13.008 17:03:03 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:13.008 17:03:03 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:13.008 17:03:03 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.FpPoQhgEos' 00:14:13.008 17:03:03 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:13.008 17:03:03 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=73136 00:14:13.008 17:03:03 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:13.008 17:03:03 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:13.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:13.008 17:03:03 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 73136 /var/tmp/bdevperf.sock 00:14:13.008 17:03:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73136 ']' 00:14:13.008 17:03:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:13.008 17:03:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:13.008 17:03:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:13.008 17:03:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:13.008 17:03:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:13.008 [2024-07-15 17:03:03.125075] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:14:13.008 [2024-07-15 17:03:03.125179] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73136 ] 00:14:13.008 [2024-07-15 17:03:03.266635] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:13.266 [2024-07-15 17:03:03.378992] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:13.266 [2024-07-15 17:03:03.432106] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:13.833 17:03:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:13.833 17:03:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:13.833 17:03:04 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.FpPoQhgEos 00:14:14.090 [2024-07-15 17:03:04.348929] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:14.090 [2024-07-15 17:03:04.349065] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:14.091 [2024-07-15 17:03:04.354856] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:14.091 [2024-07-15 17:03:04.355452] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7631f0 (107): Transport endpoint is not connected 00:14:14.091 [2024-07-15 17:03:04.356441] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7631f0 (9): Bad file descriptor 00:14:14.091 [2024-07-15 17:03:04.357436] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:14:14.091 [2024-07-15 17:03:04.357463] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:14:14.091 [2024-07-15 17:03:04.357478] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
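This stretch is a deliberate negative test, not a regression: target/tls.sh@146 wraps run_bdevperf in the harness's NOT helper and hands it /tmp/tmp.FpPoQhgEos, the second key, which was never registered for host1 with nvmf_subsystem_add_host. With a mismatched key the TLS session cannot be established, the attach errors out ("Transport endpoint is not connected" above, and the JSON-RPC Input/output error just below), and NOT turns that non-zero exit into a pass. In essence the assertion is:

  # Expected to fail: host1 is keyed with /tmp/tmp.4gILJCvSLX, not /tmp/tmp.FpPoQhgEos.
  NOT rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.FpPoQhgEos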
00:14:14.091 request: 00:14:14.091 { 00:14:14.091 "name": "TLSTEST", 00:14:14.091 "trtype": "tcp", 00:14:14.091 "traddr": "10.0.0.2", 00:14:14.091 "adrfam": "ipv4", 00:14:14.091 "trsvcid": "4420", 00:14:14.091 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:14.091 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:14.091 "prchk_reftag": false, 00:14:14.091 "prchk_guard": false, 00:14:14.091 "hdgst": false, 00:14:14.091 "ddgst": false, 00:14:14.091 "psk": "/tmp/tmp.FpPoQhgEos", 00:14:14.091 "method": "bdev_nvme_attach_controller", 00:14:14.091 "req_id": 1 00:14:14.091 } 00:14:14.091 Got JSON-RPC error response 00:14:14.091 response: 00:14:14.091 { 00:14:14.091 "code": -5, 00:14:14.091 "message": "Input/output error" 00:14:14.091 } 00:14:14.091 17:03:04 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 73136 00:14:14.091 17:03:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73136 ']' 00:14:14.091 17:03:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73136 00:14:14.091 17:03:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:14.091 17:03:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:14.349 17:03:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73136 00:14:14.349 killing process with pid 73136 00:14:14.349 Received shutdown signal, test time was about 10.000000 seconds 00:14:14.349 00:14:14.349 Latency(us) 00:14:14.349 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:14.349 =================================================================================================================== 00:14:14.349 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:14.349 17:03:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:14:14.349 17:03:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:14:14.349 17:03:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73136' 00:14:14.349 17:03:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73136 00:14:14.349 [2024-07-15 17:03:04.409844] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:14.349 17:03:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73136 00:14:14.349 17:03:04 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:14:14.349 17:03:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:14:14.349 17:03:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:14.349 17:03:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:14.349 17:03:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:14.349 17:03:04 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.4gILJCvSLX 00:14:14.349 17:03:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:14:14.349 17:03:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.4gILJCvSLX 00:14:14.349 17:03:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:14:14.349 17:03:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:14.349 17:03:04 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:14:14.349 17:03:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:14.349 17:03:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.4gILJCvSLX 00:14:14.349 17:03:04 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:14.349 17:03:04 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:14.349 17:03:04 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:14:14.349 17:03:04 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.4gILJCvSLX' 00:14:14.349 17:03:04 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:14.349 17:03:04 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=73163 00:14:14.349 17:03:04 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:14.349 17:03:04 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:14.349 17:03:04 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 73163 /var/tmp/bdevperf.sock 00:14:14.349 17:03:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73163 ']' 00:14:14.349 17:03:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:14.349 17:03:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:14.349 17:03:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:14.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:14.349 17:03:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:14.349 17:03:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:14.606 [2024-07-15 17:03:04.673062] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:14:14.606 [2024-07-15 17:03:04.673789] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73163 ] 00:14:14.606 [2024-07-15 17:03:04.804649] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:14.863 [2024-07-15 17:03:04.916190] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:14.863 [2024-07-15 17:03:04.970889] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:15.428 17:03:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:15.428 17:03:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:15.428 17:03:05 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.4gILJCvSLX 00:14:15.686 [2024-07-15 17:03:05.884205] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:15.686 [2024-07-15 17:03:05.884601] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:15.686 [2024-07-15 17:03:05.892612] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:14:15.686 [2024-07-15 17:03:05.892830] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:14:15.686 [2024-07-15 17:03:05.893024] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:15.686 [2024-07-15 17:03:05.893388] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16c31f0 (107): Transport endpoint is not connected 00:14:15.686 [2024-07-15 17:03:05.894373] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16c31f0 (9): Bad file descriptor 00:14:15.686 [2024-07-15 17:03:05.895393] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:14:15.686 [2024-07-15 17:03:05.895576] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:14:15.686 [2024-07-15 17:03:05.895689] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
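The 'Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1' errors above are the point of this negative test: host2 was never added to the subsystem, so the identity the target derives from the hash id plus host and subsystem NQNs has no registered key. A small illustrative sketch of that lookup (dictionary, helper name, and key path are invented for the illustration; only the identity string format comes from the log):

    # host1 is the only host added to cnode1 with a PSK (path is a placeholder).
    registered_psks = {
        "NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode1": "/path/to/host1.psk",
    }

    def psk_identity(hostnqn: str, subnqn: str) -> str:
        # Format as it appears in the error messages above.
        return f"NVMe0R01 {hostnqn} {subnqn}"

    ident = psk_identity("nqn.2016-06.io.spdk:host2", "nqn.2016-06.io.spdk:cnode1")
    print(registered_psks.get(ident, "no PSK for this identity -> attach fails with -5"))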
00:14:15.686 request: 00:14:15.686 { 00:14:15.686 "name": "TLSTEST", 00:14:15.686 "trtype": "tcp", 00:14:15.686 "traddr": "10.0.0.2", 00:14:15.686 "adrfam": "ipv4", 00:14:15.686 "trsvcid": "4420", 00:14:15.686 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:15.686 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:14:15.686 "prchk_reftag": false, 00:14:15.686 "prchk_guard": false, 00:14:15.686 "hdgst": false, 00:14:15.686 "ddgst": false, 00:14:15.686 "psk": "/tmp/tmp.4gILJCvSLX", 00:14:15.686 "method": "bdev_nvme_attach_controller", 00:14:15.686 "req_id": 1 00:14:15.686 } 00:14:15.686 Got JSON-RPC error response 00:14:15.686 response: 00:14:15.686 { 00:14:15.686 "code": -5, 00:14:15.686 "message": "Input/output error" 00:14:15.686 } 00:14:15.686 17:03:05 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 73163 00:14:15.686 17:03:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73163 ']' 00:14:15.686 17:03:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73163 00:14:15.686 17:03:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:15.686 17:03:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:15.686 17:03:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73163 00:14:15.686 killing process with pid 73163 00:14:15.686 Received shutdown signal, test time was about 10.000000 seconds 00:14:15.686 00:14:15.686 Latency(us) 00:14:15.686 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:15.686 =================================================================================================================== 00:14:15.686 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:15.686 17:03:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:14:15.686 17:03:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:14:15.686 17:03:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73163' 00:14:15.686 17:03:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73163 00:14:15.686 17:03:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73163 00:14:15.686 [2024-07-15 17:03:05.944017] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:15.995 17:03:06 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:14:15.995 17:03:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:14:15.995 17:03:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:15.995 17:03:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:15.995 17:03:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:15.995 17:03:06 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.4gILJCvSLX 00:14:15.995 17:03:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:14:15.995 17:03:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.4gILJCvSLX 00:14:15.995 17:03:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:14:15.995 17:03:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:15.995 17:03:06 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:14:15.995 17:03:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:15.995 17:03:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.4gILJCvSLX 00:14:15.995 17:03:06 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:15.995 17:03:06 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:14:15.995 17:03:06 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:15.995 17:03:06 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.4gILJCvSLX' 00:14:15.995 17:03:06 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:15.995 17:03:06 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=73191 00:14:15.995 17:03:06 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:15.995 17:03:06 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:15.995 17:03:06 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 73191 /var/tmp/bdevperf.sock 00:14:15.995 17:03:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73191 ']' 00:14:15.995 17:03:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:15.995 17:03:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:15.995 17:03:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:15.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:15.995 17:03:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:15.995 17:03:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:15.995 [2024-07-15 17:03:06.217358] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:14:15.995 [2024-07-15 17:03:06.217681] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73191 ] 00:14:16.253 [2024-07-15 17:03:06.355196] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:16.253 [2024-07-15 17:03:06.466625] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:16.253 [2024-07-15 17:03:06.520394] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:17.189 17:03:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:17.189 17:03:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:17.189 17:03:07 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.4gILJCvSLX 00:14:17.189 [2024-07-15 17:03:07.442600] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:17.189 [2024-07-15 17:03:07.442743] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:17.189 [2024-07-15 17:03:07.452792] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:14:17.189 [2024-07-15 17:03:07.452833] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:14:17.189 [2024-07-15 17:03:07.452901] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:17.189 [2024-07-15 17:03:07.453269] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dee1f0 (107): Transport endpoint is not connected 00:14:17.189 [2024-07-15 17:03:07.454260] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dee1f0 (9): Bad file descriptor 00:14:17.189 [2024-07-15 17:03:07.455257] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:14:17.189 [2024-07-15 17:03:07.455282] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:14:17.189 [2024-07-15 17:03:07.455297] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:14:17.189 request: 00:14:17.189 { 00:14:17.189 "name": "TLSTEST", 00:14:17.189 "trtype": "tcp", 00:14:17.189 "traddr": "10.0.0.2", 00:14:17.189 "adrfam": "ipv4", 00:14:17.189 "trsvcid": "4420", 00:14:17.189 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:14:17.189 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:17.189 "prchk_reftag": false, 00:14:17.189 "prchk_guard": false, 00:14:17.189 "hdgst": false, 00:14:17.189 "ddgst": false, 00:14:17.189 "psk": "/tmp/tmp.4gILJCvSLX", 00:14:17.189 "method": "bdev_nvme_attach_controller", 00:14:17.189 "req_id": 1 00:14:17.189 } 00:14:17.189 Got JSON-RPC error response 00:14:17.189 response: 00:14:17.189 { 00:14:17.189 "code": -5, 00:14:17.189 "message": "Input/output error" 00:14:17.189 } 00:14:17.189 17:03:07 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 73191 00:14:17.189 17:03:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73191 ']' 00:14:17.189 17:03:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73191 00:14:17.189 17:03:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:17.189 17:03:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:17.189 17:03:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73191 00:14:17.448 killing process with pid 73191 00:14:17.448 Received shutdown signal, test time was about 10.000000 seconds 00:14:17.448 00:14:17.448 Latency(us) 00:14:17.448 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:17.448 =================================================================================================================== 00:14:17.448 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:17.448 17:03:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:14:17.448 17:03:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:14:17.448 17:03:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73191' 00:14:17.448 17:03:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73191 00:14:17.448 [2024-07-15 17:03:07.502624] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:17.448 17:03:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73191 00:14:17.448 17:03:07 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:14:17.448 17:03:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:14:17.448 17:03:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:17.448 17:03:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:17.448 17:03:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:17.448 17:03:07 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:17.448 17:03:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:14:17.448 17:03:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:17.448 17:03:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:14:17.705 17:03:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:17.705 17:03:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 
00:14:17.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:17.705 17:03:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:17.705 17:03:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:17.706 17:03:07 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:17.706 17:03:07 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:17.706 17:03:07 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:17.706 17:03:07 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:14:17.706 17:03:07 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:17.706 17:03:07 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=73217 00:14:17.706 17:03:07 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:17.706 17:03:07 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:17.706 17:03:07 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 73217 /var/tmp/bdevperf.sock 00:14:17.706 17:03:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73217 ']' 00:14:17.706 17:03:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:17.706 17:03:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:17.706 17:03:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:17.706 17:03:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:17.706 17:03:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:17.706 [2024-07-15 17:03:07.794151] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:14:17.706 [2024-07-15 17:03:07.794232] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73217 ] 00:14:17.706 [2024-07-15 17:03:07.923636] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:17.964 [2024-07-15 17:03:08.035880] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:17.964 [2024-07-15 17:03:08.090476] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:18.531 17:03:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:18.531 17:03:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:18.531 17:03:08 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:14:18.798 [2024-07-15 17:03:08.993089] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:18.798 [2024-07-15 17:03:08.994572] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xae8c00 (9): Bad file descriptor 00:14:18.798 [2024-07-15 17:03:08.995569] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:14:18.798 [2024-07-15 17:03:08.995732] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:14:18.798 [2024-07-15 17:03:08.995851] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:14:18.798 request: 00:14:18.798 { 00:14:18.798 "name": "TLSTEST", 00:14:18.798 "trtype": "tcp", 00:14:18.798 "traddr": "10.0.0.2", 00:14:18.798 "adrfam": "ipv4", 00:14:18.798 "trsvcid": "4420", 00:14:18.798 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:18.798 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:18.798 "prchk_reftag": false, 00:14:18.798 "prchk_guard": false, 00:14:18.798 "hdgst": false, 00:14:18.798 "ddgst": false, 00:14:18.799 "method": "bdev_nvme_attach_controller", 00:14:18.799 "req_id": 1 00:14:18.799 } 00:14:18.799 Got JSON-RPC error response 00:14:18.799 response: 00:14:18.799 { 00:14:18.799 "code": -5, 00:14:18.799 "message": "Input/output error" 00:14:18.799 } 00:14:18.799 17:03:09 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 73217 00:14:18.799 17:03:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73217 ']' 00:14:18.799 17:03:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73217 00:14:18.799 17:03:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:18.799 17:03:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:18.799 17:03:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73217 00:14:18.799 killing process with pid 73217 00:14:18.799 Received shutdown signal, test time was about 10.000000 seconds 00:14:18.799 00:14:18.799 Latency(us) 00:14:18.799 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:18.799 =================================================================================================================== 00:14:18.799 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:18.799 17:03:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:14:18.799 17:03:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:14:18.799 17:03:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73217' 00:14:18.799 17:03:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73217 00:14:18.799 17:03:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73217 00:14:19.118 17:03:09 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:14:19.118 17:03:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:14:19.119 17:03:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:19.119 17:03:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:19.119 17:03:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:19.119 17:03:09 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 72770 00:14:19.119 17:03:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 72770 ']' 00:14:19.119 17:03:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 72770 00:14:19.119 17:03:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:19.119 17:03:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:19.119 17:03:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72770 00:14:19.119 killing process with pid 72770 00:14:19.119 17:03:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:19.119 17:03:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:19.119 17:03:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 
72770' 00:14:19.119 17:03:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 72770 00:14:19.119 [2024-07-15 17:03:09.277697] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:14:19.119 17:03:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 72770 00:14:19.378 17:03:09 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:14:19.378 17:03:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:14:19.378 17:03:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:14:19.378 17:03:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:14:19.378 17:03:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:14:19.378 17:03:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:14:19.378 17:03:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:14:19.378 17:03:09 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:14:19.378 17:03:09 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:14:19.378 17:03:09 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.XgfR7rpDIq 00:14:19.378 17:03:09 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:14:19.378 17:03:09 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.XgfR7rpDIq 00:14:19.378 17:03:09 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:14:19.378 17:03:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:19.378 17:03:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:19.378 17:03:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:19.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:19.378 17:03:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73256 00:14:19.378 17:03:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73256 00:14:19.378 17:03:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:19.378 17:03:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73256 ']' 00:14:19.378 17:03:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:19.378 17:03:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:19.378 17:03:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:19.378 17:03:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:19.378 17:03:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:19.378 [2024-07-15 17:03:09.621714] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
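format_interchange_psk above converts the raw hex key 00112233445566778899aabbccddeeff0011223344556677 with digest id 2 into the NVMeTLSkey-1:02:...: interchange string through a short embedded python helper. A sketch that should reproduce the transformation, assuming (as the helper appears to do) that a little-endian CRC32 of the key text is appended before base64 encoding:

    import base64
    import zlib

    # Inputs as passed to format_interchange_psk above: the hex key text and
    # digest id 2 (rendered as "02" in the prefix).
    key = b"00112233445566778899aabbccddeeff0011223344556677"
    digest = 2

    # Assumption: little-endian CRC32 of the key text, appended before base64,
    # wrapped as NVMeTLSkey-1:<digest>:<b64>:
    crc = zlib.crc32(key).to_bytes(4, byteorder="little")
    print("NVMeTLSkey-1:{:02x}:{}:".format(digest, base64.b64encode(key + crc).decode()))

The first 64 base64 characters are simply the hex text re-encoded, which is why the value in the log begins with MDAxMTIyMzM0...; only the trailing characters before the final colon come from the checksum.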
00:14:19.378 [2024-07-15 17:03:09.622457] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:19.638 [2024-07-15 17:03:09.759240] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:19.638 [2024-07-15 17:03:09.871072] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:19.638 [2024-07-15 17:03:09.871125] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:19.638 [2024-07-15 17:03:09.871153] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:19.638 [2024-07-15 17:03:09.871161] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:19.638 [2024-07-15 17:03:09.871168] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:19.638 [2024-07-15 17:03:09.871192] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:19.638 [2024-07-15 17:03:09.927334] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:20.570 17:03:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:20.570 17:03:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:20.570 17:03:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:20.570 17:03:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:20.570 17:03:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:20.570 17:03:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:20.570 17:03:10 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.XgfR7rpDIq 00:14:20.570 17:03:10 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.XgfR7rpDIq 00:14:20.570 17:03:10 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:20.828 [2024-07-15 17:03:10.867695] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:20.828 17:03:10 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:21.086 17:03:11 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:14:21.087 [2024-07-15 17:03:11.359789] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:21.087 [2024-07-15 17:03:11.360063] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:21.087 17:03:11 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:21.362 malloc0 00:14:21.621 17:03:11 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:21.621 17:03:11 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.XgfR7rpDIq 00:14:21.879 
[2024-07-15 17:03:12.139278] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:21.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:21.879 17:03:12 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.XgfR7rpDIq 00:14:21.879 17:03:12 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:21.879 17:03:12 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:21.879 17:03:12 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:21.879 17:03:12 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.XgfR7rpDIq' 00:14:21.879 17:03:12 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:21.879 17:03:12 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=73311 00:14:21.879 17:03:12 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:21.879 17:03:12 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:21.879 17:03:12 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 73311 /var/tmp/bdevperf.sock 00:14:21.879 17:03:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73311 ']' 00:14:21.879 17:03:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:21.879 17:03:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:21.879 17:03:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:21.879 17:03:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:21.879 17:03:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:22.137 [2024-07-15 17:03:12.198439] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
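setup_nvmf_tgt (target/tls.sh@165 above) configures the target through a fixed sequence of rpc.py calls: create the TCP transport, create cnode1, add a TLS-enabled listener on 10.0.0.2:4420, back the subsystem with a malloc bdev, and register host1 with the PSK file. The same sequence collected in one place and driven from Python, with paths and arguments copied verbatim from the log:

    import subprocess

    RPC = "/home/vagrant/spdk_repo/spdk/scripts/rpc.py"

    # Same calls, same arguments as the setup_nvmf_tgt lines above, against the
    # default target socket /var/tmp/spdk.sock.
    for args in (
        ["nvmf_create_transport", "-t", "tcp", "-o"],
        ["nvmf_create_subsystem", "nqn.2016-06.io.spdk:cnode1", "-s", "SPDK00000000000001", "-m", "10"],
        ["nvmf_subsystem_add_listener", "nqn.2016-06.io.spdk:cnode1", "-t", "tcp", "-a", "10.0.0.2", "-s", "4420", "-k"],
        ["bdev_malloc_create", "32", "4096", "-b", "malloc0"],
        ["nvmf_subsystem_add_ns", "nqn.2016-06.io.spdk:cnode1", "malloc0", "-n", "1"],
        ["nvmf_subsystem_add_host", "nqn.2016-06.io.spdk:cnode1", "nqn.2016-06.io.spdk:host1", "--psk", "/tmp/tmp.XgfR7rpDIq"],
    ):
        subprocess.run([RPC, *args], check=True)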
00:14:22.137 [2024-07-15 17:03:12.198532] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73311 ] 00:14:22.137 [2024-07-15 17:03:12.333856] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:22.395 [2024-07-15 17:03:12.463157] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:22.395 [2024-07-15 17:03:12.520511] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:22.983 17:03:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:22.983 17:03:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:22.983 17:03:13 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.XgfR7rpDIq 00:14:23.240 [2024-07-15 17:03:13.454811] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:23.240 [2024-07-15 17:03:13.455123] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:23.240 TLSTESTn1 00:14:23.497 17:03:13 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:23.497 Running I/O for 10 seconds... 00:14:33.528 00:14:33.528 Latency(us) 00:14:33.528 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:33.528 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:33.528 Verification LBA range: start 0x0 length 0x2000 00:14:33.528 TLSTESTn1 : 10.02 3949.38 15.43 0.00 0.00 32340.28 1809.69 20852.36 00:14:33.528 =================================================================================================================== 00:14:33.528 Total : 3949.38 15.43 0.00 0.00 32340.28 1809.69 20852.36 00:14:33.528 0 00:14:33.528 17:03:23 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:33.528 17:03:23 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 73311 00:14:33.528 17:03:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73311 ']' 00:14:33.528 17:03:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73311 00:14:33.528 17:03:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:33.528 17:03:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:33.528 17:03:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73311 00:14:33.528 killing process with pid 73311 00:14:33.528 Received shutdown signal, test time was about 10.000000 seconds 00:14:33.528 00:14:33.528 Latency(us) 00:14:33.528 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:33.528 =================================================================================================================== 00:14:33.528 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:33.528 17:03:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:14:33.528 17:03:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' 
reactor_2 = sudo ']' 00:14:33.528 17:03:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73311' 00:14:33.528 17:03:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73311 00:14:33.528 [2024-07-15 17:03:23.727233] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:33.528 17:03:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73311 00:14:33.786 17:03:23 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.XgfR7rpDIq 00:14:33.786 17:03:23 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.XgfR7rpDIq 00:14:33.786 17:03:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:14:33.786 17:03:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.XgfR7rpDIq 00:14:33.786 17:03:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:14:33.786 17:03:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:33.786 17:03:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:14:33.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:33.786 17:03:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:33.786 17:03:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.XgfR7rpDIq 00:14:33.786 17:03:23 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:33.786 17:03:23 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:33.786 17:03:23 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:33.786 17:03:23 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.XgfR7rpDIq' 00:14:33.786 17:03:23 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:33.786 17:03:23 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=73446 00:14:33.786 17:03:23 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:33.786 17:03:23 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:33.786 17:03:23 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 73446 /var/tmp/bdevperf.sock 00:14:33.786 17:03:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73446 ']' 00:14:33.786 17:03:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:33.786 17:03:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:33.786 17:03:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:33.786 17:03:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:33.786 17:03:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:33.786 [2024-07-15 17:03:23.998508] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:14:33.786 [2024-07-15 17:03:23.998764] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73446 ] 00:14:34.044 [2024-07-15 17:03:24.134064] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:34.044 [2024-07-15 17:03:24.242659] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:34.044 [2024-07-15 17:03:24.296010] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:34.979 17:03:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:34.979 17:03:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:34.979 17:03:24 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.XgfR7rpDIq 00:14:34.979 [2024-07-15 17:03:25.127844] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:34.979 [2024-07-15 17:03:25.128157] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:14:34.979 [2024-07-15 17:03:25.128273] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.XgfR7rpDIq 00:14:34.979 request: 00:14:34.979 { 00:14:34.979 "name": "TLSTEST", 00:14:34.979 "trtype": "tcp", 00:14:34.979 "traddr": "10.0.0.2", 00:14:34.979 "adrfam": "ipv4", 00:14:34.979 "trsvcid": "4420", 00:14:34.979 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:34.979 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:34.979 "prchk_reftag": false, 00:14:34.979 "prchk_guard": false, 00:14:34.979 "hdgst": false, 00:14:34.979 "ddgst": false, 00:14:34.979 "psk": "/tmp/tmp.XgfR7rpDIq", 00:14:34.979 "method": "bdev_nvme_attach_controller", 00:14:34.979 "req_id": 1 00:14:34.979 } 00:14:34.979 Got JSON-RPC error response 00:14:34.979 response: 00:14:34.979 { 00:14:34.979 "code": -1, 00:14:34.979 "message": "Operation not permitted" 00:14:34.979 } 00:14:34.979 17:03:25 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 73446 00:14:34.979 17:03:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73446 ']' 00:14:34.979 17:03:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73446 00:14:34.979 17:03:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:34.979 17:03:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:34.979 17:03:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73446 00:14:34.979 17:03:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:14:34.979 17:03:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:14:34.979 killing process with pid 73446 00:14:34.979 Received shutdown signal, test time was about 10.000000 seconds 00:14:34.979 00:14:34.979 Latency(us) 00:14:34.979 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:34.979 =================================================================================================================== 00:14:34.979 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:34.979 17:03:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # 
echo 'killing process with pid 73446' 00:14:34.979 17:03:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73446 00:14:34.979 17:03:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73446 00:14:35.237 17:03:25 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:14:35.237 17:03:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:14:35.237 17:03:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:35.237 17:03:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:35.237 17:03:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:35.237 17:03:25 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 73256 00:14:35.237 17:03:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73256 ']' 00:14:35.237 17:03:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73256 00:14:35.237 17:03:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:35.237 17:03:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:35.237 17:03:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73256 00:14:35.237 killing process with pid 73256 00:14:35.237 17:03:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:35.237 17:03:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:35.237 17:03:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73256' 00:14:35.237 17:03:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73256 00:14:35.237 [2024-07-15 17:03:25.424558] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:14:35.237 17:03:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73256 00:14:35.495 17:03:25 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:14:35.496 17:03:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:35.496 17:03:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:35.496 17:03:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:35.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:35.496 17:03:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73481 00:14:35.496 17:03:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:35.496 17:03:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73481 00:14:35.496 17:03:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73481 ']' 00:14:35.496 17:03:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:35.496 17:03:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:35.496 17:03:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:35.496 17:03:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:35.496 17:03:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:35.496 [2024-07-15 17:03:25.718274] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:14:35.496 [2024-07-15 17:03:25.718704] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:35.754 [2024-07-15 17:03:25.856473] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:35.754 [2024-07-15 17:03:25.970524] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:35.754 [2024-07-15 17:03:25.970583] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:35.754 [2024-07-15 17:03:25.970611] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:35.754 [2024-07-15 17:03:25.970619] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:35.754 [2024-07-15 17:03:25.970626] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:35.754 [2024-07-15 17:03:25.970659] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:35.754 [2024-07-15 17:03:26.026227] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:36.690 17:03:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:36.690 17:03:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:36.690 17:03:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:36.690 17:03:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:36.690 17:03:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:36.690 17:03:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:36.690 17:03:26 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.XgfR7rpDIq 00:14:36.690 17:03:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:14:36.690 17:03:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.XgfR7rpDIq 00:14:36.690 17:03:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:14:36.690 17:03:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:36.690 17:03:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:14:36.690 17:03:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:36.690 17:03:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.XgfR7rpDIq 00:14:36.690 17:03:26 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.XgfR7rpDIq 00:14:36.690 17:03:26 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:36.690 [2024-07-15 17:03:26.974818] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:36.948 17:03:26 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:36.948 17:03:27 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:14:37.206 [2024-07-15 17:03:27.434918] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: 
TLS support is considered experimental 00:14:37.206 [2024-07-15 17:03:27.435130] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:37.206 17:03:27 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:37.463 malloc0 00:14:37.463 17:03:27 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:37.721 17:03:27 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.XgfR7rpDIq 00:14:37.978 [2024-07-15 17:03:28.198474] tcp.c:3603:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:14:37.978 [2024-07-15 17:03:28.198530] tcp.c:3689:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:14:37.978 [2024-07-15 17:03:28.198565] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:14:37.978 request: 00:14:37.978 { 00:14:37.978 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:37.978 "host": "nqn.2016-06.io.spdk:host1", 00:14:37.978 "psk": "/tmp/tmp.XgfR7rpDIq", 00:14:37.978 "method": "nvmf_subsystem_add_host", 00:14:37.978 "req_id": 1 00:14:37.978 } 00:14:37.978 Got JSON-RPC error response 00:14:37.978 response: 00:14:37.978 { 00:14:37.978 "code": -32603, 00:14:37.978 "message": "Internal error" 00:14:37.978 } 00:14:37.978 17:03:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:14:37.978 17:03:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:37.978 17:03:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:37.978 17:03:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:37.978 17:03:28 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 73481 00:14:37.978 17:03:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73481 ']' 00:14:37.978 17:03:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73481 00:14:37.978 17:03:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:37.978 17:03:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:37.978 17:03:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73481 00:14:37.978 killing process with pid 73481 00:14:37.978 17:03:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:37.978 17:03:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:37.978 17:03:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73481' 00:14:37.978 17:03:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73481 00:14:37.978 17:03:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73481 00:14:38.235 17:03:28 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.XgfR7rpDIq 00:14:38.235 17:03:28 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:14:38.235 17:03:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:38.235 17:03:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:38.235 17:03:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:38.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
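With the key file at mode 0666, both sides reject it: the initiator's bdev_nvme_attach_controller above fails with 'Incorrect permissions for PSK file' / 'Operation not permitted', and nvmf_subsystem_add_host on the target fails the same way until the file is returned to 0600. A sketch of an equivalent owner-only check (the exact mask SPDK applies is not shown in the log; rejecting any group/other bits is an assumption consistent with the 0600-passes / 0666-fails behaviour):

    import os
    import stat

    def psk_file_usable(path: str) -> bool:
        # Assumption: a PSK file must not be accessible by group or other,
        # matching the chmod 0600 vs 0666 behaviour in the log above.
        mode = stat.S_IMODE(os.stat(path).st_mode)
        return mode & 0o077 == 0

    print(psk_file_usable("/tmp/tmp.XgfR7rpDIq"))  # False at 0666, True at 0600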
00:14:38.235 17:03:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73548 00:14:38.235 17:03:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:38.235 17:03:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73548 00:14:38.236 17:03:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73548 ']' 00:14:38.236 17:03:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:38.236 17:03:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:38.236 17:03:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:38.236 17:03:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:38.236 17:03:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:38.236 [2024-07-15 17:03:28.531147] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:14:38.236 [2024-07-15 17:03:28.531224] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:38.492 [2024-07-15 17:03:28.665318] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:38.492 [2024-07-15 17:03:28.769803] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:38.492 [2024-07-15 17:03:28.769855] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:38.492 [2024-07-15 17:03:28.769866] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:38.492 [2024-07-15 17:03:28.769874] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:38.492 [2024-07-15 17:03:28.769881] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:38.492 [2024-07-15 17:03:28.769913] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:38.749 [2024-07-15 17:03:28.824555] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:39.315 17:03:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:39.315 17:03:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:39.315 17:03:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:39.315 17:03:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:39.315 17:03:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:39.315 17:03:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:39.315 17:03:29 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.XgfR7rpDIq 00:14:39.315 17:03:29 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.XgfR7rpDIq 00:14:39.315 17:03:29 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:39.573 [2024-07-15 17:03:29.795669] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:39.573 17:03:29 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:39.831 17:03:30 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:14:40.089 [2024-07-15 17:03:30.307770] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:40.089 [2024-07-15 17:03:30.308041] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:40.089 17:03:30 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:40.345 malloc0 00:14:40.345 17:03:30 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:40.603 17:03:30 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.XgfR7rpDIq 00:14:40.862 [2024-07-15 17:03:31.035450] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:40.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
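The bdevperf process launched with -z above sits idle until it is configured over its RPC socket; the driving pattern the script uses, with the socket path, PSK file and I/O parameters taken from this run, looks roughly like this (paths shown repo-relative):

  # start bdevperf in wait-for-RPC mode, attach a TLS-protected controller, then kick off the workload
  build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      --psk /tmp/tmp.XgfR7rpDIq
  examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests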
00:14:40.862 17:03:31 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=73598 00:14:40.862 17:03:31 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:40.862 17:03:31 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:40.862 17:03:31 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 73598 /var/tmp/bdevperf.sock 00:14:40.862 17:03:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73598 ']' 00:14:40.862 17:03:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:40.862 17:03:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:40.862 17:03:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:40.862 17:03:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:40.862 17:03:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:40.862 [2024-07-15 17:03:31.123215] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:14:40.862 [2024-07-15 17:03:31.123649] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73598 ] 00:14:41.119 [2024-07-15 17:03:31.267636] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:41.119 [2024-07-15 17:03:31.377522] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:41.376 [2024-07-15 17:03:31.430600] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:41.941 17:03:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:41.941 17:03:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:41.941 17:03:32 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.XgfR7rpDIq 00:14:42.199 [2024-07-15 17:03:32.240163] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:42.199 [2024-07-15 17:03:32.240323] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:42.199 TLSTESTn1 00:14:42.199 17:03:32 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:14:42.475 17:03:32 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:14:42.475 "subsystems": [ 00:14:42.475 { 00:14:42.475 "subsystem": "keyring", 00:14:42.475 "config": [] 00:14:42.475 }, 00:14:42.475 { 00:14:42.475 "subsystem": "iobuf", 00:14:42.475 "config": [ 00:14:42.475 { 00:14:42.475 "method": "iobuf_set_options", 00:14:42.475 "params": { 00:14:42.475 "small_pool_count": 8192, 00:14:42.475 "large_pool_count": 1024, 00:14:42.475 "small_bufsize": 8192, 00:14:42.475 "large_bufsize": 135168 00:14:42.475 } 00:14:42.475 } 00:14:42.475 ] 00:14:42.475 }, 00:14:42.475 { 00:14:42.475 "subsystem": "sock", 00:14:42.475 "config": [ 00:14:42.475 { 00:14:42.475 
"method": "sock_set_default_impl", 00:14:42.475 "params": { 00:14:42.475 "impl_name": "uring" 00:14:42.475 } 00:14:42.475 }, 00:14:42.475 { 00:14:42.475 "method": "sock_impl_set_options", 00:14:42.475 "params": { 00:14:42.475 "impl_name": "ssl", 00:14:42.475 "recv_buf_size": 4096, 00:14:42.475 "send_buf_size": 4096, 00:14:42.475 "enable_recv_pipe": true, 00:14:42.475 "enable_quickack": false, 00:14:42.475 "enable_placement_id": 0, 00:14:42.475 "enable_zerocopy_send_server": true, 00:14:42.475 "enable_zerocopy_send_client": false, 00:14:42.475 "zerocopy_threshold": 0, 00:14:42.475 "tls_version": 0, 00:14:42.475 "enable_ktls": false 00:14:42.475 } 00:14:42.475 }, 00:14:42.475 { 00:14:42.475 "method": "sock_impl_set_options", 00:14:42.475 "params": { 00:14:42.475 "impl_name": "posix", 00:14:42.475 "recv_buf_size": 2097152, 00:14:42.475 "send_buf_size": 2097152, 00:14:42.475 "enable_recv_pipe": true, 00:14:42.475 "enable_quickack": false, 00:14:42.475 "enable_placement_id": 0, 00:14:42.475 "enable_zerocopy_send_server": true, 00:14:42.475 "enable_zerocopy_send_client": false, 00:14:42.475 "zerocopy_threshold": 0, 00:14:42.475 "tls_version": 0, 00:14:42.475 "enable_ktls": false 00:14:42.475 } 00:14:42.475 }, 00:14:42.475 { 00:14:42.475 "method": "sock_impl_set_options", 00:14:42.475 "params": { 00:14:42.475 "impl_name": "uring", 00:14:42.475 "recv_buf_size": 2097152, 00:14:42.475 "send_buf_size": 2097152, 00:14:42.475 "enable_recv_pipe": true, 00:14:42.475 "enable_quickack": false, 00:14:42.475 "enable_placement_id": 0, 00:14:42.475 "enable_zerocopy_send_server": false, 00:14:42.475 "enable_zerocopy_send_client": false, 00:14:42.475 "zerocopy_threshold": 0, 00:14:42.475 "tls_version": 0, 00:14:42.475 "enable_ktls": false 00:14:42.475 } 00:14:42.475 } 00:14:42.475 ] 00:14:42.475 }, 00:14:42.475 { 00:14:42.475 "subsystem": "vmd", 00:14:42.475 "config": [] 00:14:42.475 }, 00:14:42.475 { 00:14:42.475 "subsystem": "accel", 00:14:42.475 "config": [ 00:14:42.475 { 00:14:42.475 "method": "accel_set_options", 00:14:42.475 "params": { 00:14:42.475 "small_cache_size": 128, 00:14:42.475 "large_cache_size": 16, 00:14:42.475 "task_count": 2048, 00:14:42.475 "sequence_count": 2048, 00:14:42.475 "buf_count": 2048 00:14:42.475 } 00:14:42.475 } 00:14:42.475 ] 00:14:42.475 }, 00:14:42.475 { 00:14:42.475 "subsystem": "bdev", 00:14:42.475 "config": [ 00:14:42.475 { 00:14:42.475 "method": "bdev_set_options", 00:14:42.475 "params": { 00:14:42.475 "bdev_io_pool_size": 65535, 00:14:42.475 "bdev_io_cache_size": 256, 00:14:42.475 "bdev_auto_examine": true, 00:14:42.475 "iobuf_small_cache_size": 128, 00:14:42.475 "iobuf_large_cache_size": 16 00:14:42.475 } 00:14:42.475 }, 00:14:42.475 { 00:14:42.475 "method": "bdev_raid_set_options", 00:14:42.475 "params": { 00:14:42.475 "process_window_size_kb": 1024 00:14:42.475 } 00:14:42.475 }, 00:14:42.475 { 00:14:42.475 "method": "bdev_iscsi_set_options", 00:14:42.475 "params": { 00:14:42.475 "timeout_sec": 30 00:14:42.475 } 00:14:42.475 }, 00:14:42.475 { 00:14:42.475 "method": "bdev_nvme_set_options", 00:14:42.475 "params": { 00:14:42.475 "action_on_timeout": "none", 00:14:42.475 "timeout_us": 0, 00:14:42.475 "timeout_admin_us": 0, 00:14:42.475 "keep_alive_timeout_ms": 10000, 00:14:42.475 "arbitration_burst": 0, 00:14:42.475 "low_priority_weight": 0, 00:14:42.475 "medium_priority_weight": 0, 00:14:42.475 "high_priority_weight": 0, 00:14:42.475 "nvme_adminq_poll_period_us": 10000, 00:14:42.475 "nvme_ioq_poll_period_us": 0, 00:14:42.475 "io_queue_requests": 0, 00:14:42.475 
"delay_cmd_submit": true, 00:14:42.475 "transport_retry_count": 4, 00:14:42.475 "bdev_retry_count": 3, 00:14:42.475 "transport_ack_timeout": 0, 00:14:42.475 "ctrlr_loss_timeout_sec": 0, 00:14:42.475 "reconnect_delay_sec": 0, 00:14:42.475 "fast_io_fail_timeout_sec": 0, 00:14:42.475 "disable_auto_failback": false, 00:14:42.475 "generate_uuids": false, 00:14:42.475 "transport_tos": 0, 00:14:42.475 "nvme_error_stat": false, 00:14:42.475 "rdma_srq_size": 0, 00:14:42.475 "io_path_stat": false, 00:14:42.475 "allow_accel_sequence": false, 00:14:42.475 "rdma_max_cq_size": 0, 00:14:42.475 "rdma_cm_event_timeout_ms": 0, 00:14:42.475 "dhchap_digests": [ 00:14:42.475 "sha256", 00:14:42.475 "sha384", 00:14:42.475 "sha512" 00:14:42.475 ], 00:14:42.475 "dhchap_dhgroups": [ 00:14:42.475 "null", 00:14:42.475 "ffdhe2048", 00:14:42.475 "ffdhe3072", 00:14:42.475 "ffdhe4096", 00:14:42.475 "ffdhe6144", 00:14:42.475 "ffdhe8192" 00:14:42.475 ] 00:14:42.475 } 00:14:42.475 }, 00:14:42.475 { 00:14:42.475 "method": "bdev_nvme_set_hotplug", 00:14:42.475 "params": { 00:14:42.475 "period_us": 100000, 00:14:42.475 "enable": false 00:14:42.475 } 00:14:42.475 }, 00:14:42.475 { 00:14:42.475 "method": "bdev_malloc_create", 00:14:42.475 "params": { 00:14:42.476 "name": "malloc0", 00:14:42.476 "num_blocks": 8192, 00:14:42.476 "block_size": 4096, 00:14:42.476 "physical_block_size": 4096, 00:14:42.476 "uuid": "a0633f24-ff06-4a84-8de2-77de37b13d51", 00:14:42.476 "optimal_io_boundary": 0 00:14:42.476 } 00:14:42.476 }, 00:14:42.476 { 00:14:42.476 "method": "bdev_wait_for_examine" 00:14:42.476 } 00:14:42.476 ] 00:14:42.476 }, 00:14:42.476 { 00:14:42.476 "subsystem": "nbd", 00:14:42.476 "config": [] 00:14:42.476 }, 00:14:42.476 { 00:14:42.476 "subsystem": "scheduler", 00:14:42.476 "config": [ 00:14:42.476 { 00:14:42.476 "method": "framework_set_scheduler", 00:14:42.476 "params": { 00:14:42.476 "name": "static" 00:14:42.476 } 00:14:42.476 } 00:14:42.476 ] 00:14:42.476 }, 00:14:42.476 { 00:14:42.476 "subsystem": "nvmf", 00:14:42.476 "config": [ 00:14:42.476 { 00:14:42.476 "method": "nvmf_set_config", 00:14:42.476 "params": { 00:14:42.476 "discovery_filter": "match_any", 00:14:42.476 "admin_cmd_passthru": { 00:14:42.476 "identify_ctrlr": false 00:14:42.476 } 00:14:42.476 } 00:14:42.476 }, 00:14:42.476 { 00:14:42.476 "method": "nvmf_set_max_subsystems", 00:14:42.476 "params": { 00:14:42.476 "max_subsystems": 1024 00:14:42.476 } 00:14:42.476 }, 00:14:42.476 { 00:14:42.476 "method": "nvmf_set_crdt", 00:14:42.476 "params": { 00:14:42.476 "crdt1": 0, 00:14:42.476 "crdt2": 0, 00:14:42.476 "crdt3": 0 00:14:42.476 } 00:14:42.476 }, 00:14:42.476 { 00:14:42.476 "method": "nvmf_create_transport", 00:14:42.476 "params": { 00:14:42.476 "trtype": "TCP", 00:14:42.476 "max_queue_depth": 128, 00:14:42.476 "max_io_qpairs_per_ctrlr": 127, 00:14:42.476 "in_capsule_data_size": 4096, 00:14:42.476 "max_io_size": 131072, 00:14:42.476 "io_unit_size": 131072, 00:14:42.476 "max_aq_depth": 128, 00:14:42.476 "num_shared_buffers": 511, 00:14:42.476 "buf_cache_size": 4294967295, 00:14:42.476 "dif_insert_or_strip": false, 00:14:42.476 "zcopy": false, 00:14:42.476 "c2h_success": false, 00:14:42.476 "sock_priority": 0, 00:14:42.476 "abort_timeout_sec": 1, 00:14:42.476 "ack_timeout": 0, 00:14:42.476 "data_wr_pool_size": 0 00:14:42.476 } 00:14:42.476 }, 00:14:42.476 { 00:14:42.476 "method": "nvmf_create_subsystem", 00:14:42.476 "params": { 00:14:42.476 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:42.476 "allow_any_host": false, 00:14:42.476 "serial_number": 
"SPDK00000000000001", 00:14:42.476 "model_number": "SPDK bdev Controller", 00:14:42.476 "max_namespaces": 10, 00:14:42.476 "min_cntlid": 1, 00:14:42.476 "max_cntlid": 65519, 00:14:42.476 "ana_reporting": false 00:14:42.476 } 00:14:42.476 }, 00:14:42.476 { 00:14:42.476 "method": "nvmf_subsystem_add_host", 00:14:42.476 "params": { 00:14:42.476 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:42.476 "host": "nqn.2016-06.io.spdk:host1", 00:14:42.476 "psk": "/tmp/tmp.XgfR7rpDIq" 00:14:42.476 } 00:14:42.476 }, 00:14:42.476 { 00:14:42.476 "method": "nvmf_subsystem_add_ns", 00:14:42.476 "params": { 00:14:42.476 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:42.476 "namespace": { 00:14:42.476 "nsid": 1, 00:14:42.476 "bdev_name": "malloc0", 00:14:42.476 "nguid": "A0633F24FF064A848DE277DE37B13D51", 00:14:42.476 "uuid": "a0633f24-ff06-4a84-8de2-77de37b13d51", 00:14:42.476 "no_auto_visible": false 00:14:42.476 } 00:14:42.476 } 00:14:42.476 }, 00:14:42.476 { 00:14:42.476 "method": "nvmf_subsystem_add_listener", 00:14:42.476 "params": { 00:14:42.476 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:42.476 "listen_address": { 00:14:42.476 "trtype": "TCP", 00:14:42.476 "adrfam": "IPv4", 00:14:42.476 "traddr": "10.0.0.2", 00:14:42.476 "trsvcid": "4420" 00:14:42.476 }, 00:14:42.476 "secure_channel": true 00:14:42.476 } 00:14:42.476 } 00:14:42.476 ] 00:14:42.476 } 00:14:42.476 ] 00:14:42.476 }' 00:14:42.476 17:03:32 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:14:43.043 17:03:33 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:14:43.043 "subsystems": [ 00:14:43.043 { 00:14:43.043 "subsystem": "keyring", 00:14:43.043 "config": [] 00:14:43.043 }, 00:14:43.043 { 00:14:43.043 "subsystem": "iobuf", 00:14:43.043 "config": [ 00:14:43.043 { 00:14:43.043 "method": "iobuf_set_options", 00:14:43.043 "params": { 00:14:43.043 "small_pool_count": 8192, 00:14:43.043 "large_pool_count": 1024, 00:14:43.043 "small_bufsize": 8192, 00:14:43.043 "large_bufsize": 135168 00:14:43.043 } 00:14:43.043 } 00:14:43.043 ] 00:14:43.043 }, 00:14:43.043 { 00:14:43.043 "subsystem": "sock", 00:14:43.043 "config": [ 00:14:43.043 { 00:14:43.043 "method": "sock_set_default_impl", 00:14:43.043 "params": { 00:14:43.043 "impl_name": "uring" 00:14:43.043 } 00:14:43.043 }, 00:14:43.043 { 00:14:43.043 "method": "sock_impl_set_options", 00:14:43.043 "params": { 00:14:43.043 "impl_name": "ssl", 00:14:43.043 "recv_buf_size": 4096, 00:14:43.043 "send_buf_size": 4096, 00:14:43.043 "enable_recv_pipe": true, 00:14:43.043 "enable_quickack": false, 00:14:43.043 "enable_placement_id": 0, 00:14:43.043 "enable_zerocopy_send_server": true, 00:14:43.043 "enable_zerocopy_send_client": false, 00:14:43.043 "zerocopy_threshold": 0, 00:14:43.043 "tls_version": 0, 00:14:43.043 "enable_ktls": false 00:14:43.043 } 00:14:43.043 }, 00:14:43.043 { 00:14:43.043 "method": "sock_impl_set_options", 00:14:43.043 "params": { 00:14:43.043 "impl_name": "posix", 00:14:43.043 "recv_buf_size": 2097152, 00:14:43.043 "send_buf_size": 2097152, 00:14:43.043 "enable_recv_pipe": true, 00:14:43.043 "enable_quickack": false, 00:14:43.043 "enable_placement_id": 0, 00:14:43.043 "enable_zerocopy_send_server": true, 00:14:43.043 "enable_zerocopy_send_client": false, 00:14:43.043 "zerocopy_threshold": 0, 00:14:43.043 "tls_version": 0, 00:14:43.043 "enable_ktls": false 00:14:43.043 } 00:14:43.043 }, 00:14:43.043 { 00:14:43.043 "method": "sock_impl_set_options", 00:14:43.043 "params": { 00:14:43.043 "impl_name": "uring", 
00:14:43.043 "recv_buf_size": 2097152, 00:14:43.043 "send_buf_size": 2097152, 00:14:43.043 "enable_recv_pipe": true, 00:14:43.043 "enable_quickack": false, 00:14:43.043 "enable_placement_id": 0, 00:14:43.043 "enable_zerocopy_send_server": false, 00:14:43.043 "enable_zerocopy_send_client": false, 00:14:43.043 "zerocopy_threshold": 0, 00:14:43.043 "tls_version": 0, 00:14:43.043 "enable_ktls": false 00:14:43.043 } 00:14:43.043 } 00:14:43.043 ] 00:14:43.043 }, 00:14:43.043 { 00:14:43.043 "subsystem": "vmd", 00:14:43.043 "config": [] 00:14:43.043 }, 00:14:43.043 { 00:14:43.043 "subsystem": "accel", 00:14:43.043 "config": [ 00:14:43.043 { 00:14:43.043 "method": "accel_set_options", 00:14:43.043 "params": { 00:14:43.043 "small_cache_size": 128, 00:14:43.043 "large_cache_size": 16, 00:14:43.043 "task_count": 2048, 00:14:43.043 "sequence_count": 2048, 00:14:43.043 "buf_count": 2048 00:14:43.043 } 00:14:43.043 } 00:14:43.043 ] 00:14:43.043 }, 00:14:43.043 { 00:14:43.043 "subsystem": "bdev", 00:14:43.043 "config": [ 00:14:43.043 { 00:14:43.043 "method": "bdev_set_options", 00:14:43.043 "params": { 00:14:43.043 "bdev_io_pool_size": 65535, 00:14:43.043 "bdev_io_cache_size": 256, 00:14:43.043 "bdev_auto_examine": true, 00:14:43.043 "iobuf_small_cache_size": 128, 00:14:43.043 "iobuf_large_cache_size": 16 00:14:43.043 } 00:14:43.043 }, 00:14:43.043 { 00:14:43.043 "method": "bdev_raid_set_options", 00:14:43.043 "params": { 00:14:43.043 "process_window_size_kb": 1024 00:14:43.043 } 00:14:43.043 }, 00:14:43.043 { 00:14:43.043 "method": "bdev_iscsi_set_options", 00:14:43.043 "params": { 00:14:43.043 "timeout_sec": 30 00:14:43.043 } 00:14:43.043 }, 00:14:43.043 { 00:14:43.043 "method": "bdev_nvme_set_options", 00:14:43.043 "params": { 00:14:43.043 "action_on_timeout": "none", 00:14:43.043 "timeout_us": 0, 00:14:43.043 "timeout_admin_us": 0, 00:14:43.043 "keep_alive_timeout_ms": 10000, 00:14:43.043 "arbitration_burst": 0, 00:14:43.043 "low_priority_weight": 0, 00:14:43.043 "medium_priority_weight": 0, 00:14:43.043 "high_priority_weight": 0, 00:14:43.043 "nvme_adminq_poll_period_us": 10000, 00:14:43.043 "nvme_ioq_poll_period_us": 0, 00:14:43.043 "io_queue_requests": 512, 00:14:43.043 "delay_cmd_submit": true, 00:14:43.043 "transport_retry_count": 4, 00:14:43.043 "bdev_retry_count": 3, 00:14:43.043 "transport_ack_timeout": 0, 00:14:43.043 "ctrlr_loss_timeout_sec": 0, 00:14:43.043 "reconnect_delay_sec": 0, 00:14:43.043 "fast_io_fail_timeout_sec": 0, 00:14:43.043 "disable_auto_failback": false, 00:14:43.043 "generate_uuids": false, 00:14:43.043 "transport_tos": 0, 00:14:43.043 "nvme_error_stat": false, 00:14:43.043 "rdma_srq_size": 0, 00:14:43.043 "io_path_stat": false, 00:14:43.043 "allow_accel_sequence": false, 00:14:43.043 "rdma_max_cq_size": 0, 00:14:43.043 "rdma_cm_event_timeout_ms": 0, 00:14:43.043 "dhchap_digests": [ 00:14:43.043 "sha256", 00:14:43.043 "sha384", 00:14:43.043 "sha512" 00:14:43.043 ], 00:14:43.043 "dhchap_dhgroups": [ 00:14:43.043 "null", 00:14:43.043 "ffdhe2048", 00:14:43.043 "ffdhe3072", 00:14:43.043 "ffdhe4096", 00:14:43.043 "ffdhe6144", 00:14:43.043 "ffdhe8192" 00:14:43.043 ] 00:14:43.043 } 00:14:43.043 }, 00:14:43.043 { 00:14:43.043 "method": "bdev_nvme_attach_controller", 00:14:43.043 "params": { 00:14:43.043 "name": "TLSTEST", 00:14:43.043 "trtype": "TCP", 00:14:43.043 "adrfam": "IPv4", 00:14:43.043 "traddr": "10.0.0.2", 00:14:43.043 "trsvcid": "4420", 00:14:43.043 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:43.043 "prchk_reftag": false, 00:14:43.043 "prchk_guard": false, 00:14:43.043 
"ctrlr_loss_timeout_sec": 0, 00:14:43.043 "reconnect_delay_sec": 0, 00:14:43.043 "fast_io_fail_timeout_sec": 0, 00:14:43.043 "psk": "/tmp/tmp.XgfR7rpDIq", 00:14:43.043 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:43.043 "hdgst": false, 00:14:43.043 "ddgst": false 00:14:43.043 } 00:14:43.043 }, 00:14:43.043 { 00:14:43.043 "method": "bdev_nvme_set_hotplug", 00:14:43.043 "params": { 00:14:43.043 "period_us": 100000, 00:14:43.043 "enable": false 00:14:43.043 } 00:14:43.043 }, 00:14:43.043 { 00:14:43.043 "method": "bdev_wait_for_examine" 00:14:43.043 } 00:14:43.043 ] 00:14:43.043 }, 00:14:43.043 { 00:14:43.043 "subsystem": "nbd", 00:14:43.043 "config": [] 00:14:43.043 } 00:14:43.043 ] 00:14:43.043 }' 00:14:43.043 17:03:33 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 73598 00:14:43.043 17:03:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73598 ']' 00:14:43.043 17:03:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73598 00:14:43.043 17:03:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:43.043 17:03:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:43.043 17:03:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73598 00:14:43.043 killing process with pid 73598 00:14:43.043 Received shutdown signal, test time was about 10.000000 seconds 00:14:43.043 00:14:43.043 Latency(us) 00:14:43.043 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:43.043 =================================================================================================================== 00:14:43.043 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:43.044 17:03:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:14:43.044 17:03:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:14:43.044 17:03:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73598' 00:14:43.044 17:03:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73598 00:14:43.044 [2024-07-15 17:03:33.075251] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:43.044 17:03:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73598 00:14:43.044 17:03:33 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 73548 00:14:43.044 17:03:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73548 ']' 00:14:43.044 17:03:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73548 00:14:43.044 17:03:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:43.044 17:03:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:43.044 17:03:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73548 00:14:43.044 killing process with pid 73548 00:14:43.044 17:03:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:43.044 17:03:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:43.044 17:03:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73548' 00:14:43.044 17:03:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73548 00:14:43.044 [2024-07-15 17:03:33.324904] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled 
for removal in v24.09 hit 1 times 00:14:43.044 17:03:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73548 00:14:43.302 17:03:33 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:14:43.302 17:03:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:43.302 17:03:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:43.302 17:03:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:43.302 17:03:33 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:14:43.302 "subsystems": [ 00:14:43.302 { 00:14:43.302 "subsystem": "keyring", 00:14:43.302 "config": [] 00:14:43.302 }, 00:14:43.302 { 00:14:43.302 "subsystem": "iobuf", 00:14:43.302 "config": [ 00:14:43.302 { 00:14:43.302 "method": "iobuf_set_options", 00:14:43.302 "params": { 00:14:43.302 "small_pool_count": 8192, 00:14:43.302 "large_pool_count": 1024, 00:14:43.302 "small_bufsize": 8192, 00:14:43.302 "large_bufsize": 135168 00:14:43.302 } 00:14:43.302 } 00:14:43.302 ] 00:14:43.302 }, 00:14:43.302 { 00:14:43.302 "subsystem": "sock", 00:14:43.302 "config": [ 00:14:43.302 { 00:14:43.302 "method": "sock_set_default_impl", 00:14:43.302 "params": { 00:14:43.302 "impl_name": "uring" 00:14:43.302 } 00:14:43.302 }, 00:14:43.302 { 00:14:43.302 "method": "sock_impl_set_options", 00:14:43.302 "params": { 00:14:43.302 "impl_name": "ssl", 00:14:43.302 "recv_buf_size": 4096, 00:14:43.302 "send_buf_size": 4096, 00:14:43.302 "enable_recv_pipe": true, 00:14:43.302 "enable_quickack": false, 00:14:43.302 "enable_placement_id": 0, 00:14:43.302 "enable_zerocopy_send_server": true, 00:14:43.302 "enable_zerocopy_send_client": false, 00:14:43.302 "zerocopy_threshold": 0, 00:14:43.302 "tls_version": 0, 00:14:43.302 "enable_ktls": false 00:14:43.302 } 00:14:43.302 }, 00:14:43.302 { 00:14:43.302 "method": "sock_impl_set_options", 00:14:43.302 "params": { 00:14:43.302 "impl_name": "posix", 00:14:43.302 "recv_buf_size": 2097152, 00:14:43.302 "send_buf_size": 2097152, 00:14:43.302 "enable_recv_pipe": true, 00:14:43.302 "enable_quickack": false, 00:14:43.302 "enable_placement_id": 0, 00:14:43.302 "enable_zerocopy_send_server": true, 00:14:43.302 "enable_zerocopy_send_client": false, 00:14:43.302 "zerocopy_threshold": 0, 00:14:43.302 "tls_version": 0, 00:14:43.302 "enable_ktls": false 00:14:43.302 } 00:14:43.302 }, 00:14:43.302 { 00:14:43.302 "method": "sock_impl_set_options", 00:14:43.302 "params": { 00:14:43.302 "impl_name": "uring", 00:14:43.302 "recv_buf_size": 2097152, 00:14:43.302 "send_buf_size": 2097152, 00:14:43.302 "enable_recv_pipe": true, 00:14:43.302 "enable_quickack": false, 00:14:43.302 "enable_placement_id": 0, 00:14:43.302 "enable_zerocopy_send_server": false, 00:14:43.302 "enable_zerocopy_send_client": false, 00:14:43.302 "zerocopy_threshold": 0, 00:14:43.302 "tls_version": 0, 00:14:43.302 "enable_ktls": false 00:14:43.302 } 00:14:43.302 } 00:14:43.302 ] 00:14:43.302 }, 00:14:43.302 { 00:14:43.302 "subsystem": "vmd", 00:14:43.302 "config": [] 00:14:43.302 }, 00:14:43.302 { 00:14:43.302 "subsystem": "accel", 00:14:43.302 "config": [ 00:14:43.302 { 00:14:43.302 "method": "accel_set_options", 00:14:43.302 "params": { 00:14:43.302 "small_cache_size": 128, 00:14:43.302 "large_cache_size": 16, 00:14:43.302 "task_count": 2048, 00:14:43.302 "sequence_count": 2048, 00:14:43.302 "buf_count": 2048 00:14:43.302 } 00:14:43.302 } 00:14:43.302 ] 00:14:43.302 }, 00:14:43.302 { 00:14:43.302 "subsystem": "bdev", 00:14:43.302 "config": [ 00:14:43.302 { 
00:14:43.302 "method": "bdev_set_options", 00:14:43.302 "params": { 00:14:43.302 "bdev_io_pool_size": 65535, 00:14:43.302 "bdev_io_cache_size": 256, 00:14:43.302 "bdev_auto_examine": true, 00:14:43.302 "iobuf_small_cache_size": 128, 00:14:43.302 "iobuf_large_cache_size": 16 00:14:43.302 } 00:14:43.302 }, 00:14:43.302 { 00:14:43.302 "method": "bdev_raid_set_options", 00:14:43.302 "params": { 00:14:43.302 "process_window_size_kb": 1024 00:14:43.302 } 00:14:43.302 }, 00:14:43.302 { 00:14:43.302 "method": "bdev_iscsi_set_options", 00:14:43.302 "params": { 00:14:43.302 "timeout_sec": 30 00:14:43.302 } 00:14:43.302 }, 00:14:43.302 { 00:14:43.302 "method": "bdev_nvme_set_options", 00:14:43.302 "params": { 00:14:43.302 "action_on_timeout": "none", 00:14:43.302 "timeout_us": 0, 00:14:43.302 "timeout_admin_us": 0, 00:14:43.302 "keep_alive_timeout_ms": 10000, 00:14:43.302 "arbitration_burst": 0, 00:14:43.302 "low_priority_weight": 0, 00:14:43.302 "medium_priority_weight": 0, 00:14:43.302 "high_priority_weight": 0, 00:14:43.302 "nvme_adminq_poll_period_us": 10000, 00:14:43.302 "nvme_ioq_poll_period_us": 0, 00:14:43.302 "io_queue_requests": 0, 00:14:43.302 "delay_cmd_submit": true, 00:14:43.302 "transport_retry_count": 4, 00:14:43.302 "bdev_retry_count": 3, 00:14:43.302 "transport_ack_timeout": 0, 00:14:43.302 "ctrlr_loss_timeout_sec": 0, 00:14:43.302 "reconnect_delay_sec": 0, 00:14:43.302 "fast_io_fail_timeout_sec": 0, 00:14:43.302 "disable_auto_failback": false, 00:14:43.302 "generate_uuids": false, 00:14:43.302 "transport_tos": 0, 00:14:43.302 "nvme_error_stat": false, 00:14:43.302 "rdma_srq_size": 0, 00:14:43.302 "io_path_stat": false, 00:14:43.302 "allow_accel_sequence": false, 00:14:43.302 "rdma_max_cq_size": 0, 00:14:43.302 "rdma_cm_event_timeout_ms": 0, 00:14:43.302 "dhchap_digests": [ 00:14:43.302 "sha256", 00:14:43.302 "sha384", 00:14:43.302 "sha512" 00:14:43.302 ], 00:14:43.302 "dhchap_dhgroups": [ 00:14:43.302 "null", 00:14:43.302 "ffdhe2048", 00:14:43.302 "ffdhe3072", 00:14:43.302 "ffdhe4096", 00:14:43.302 "ffdhe6144", 00:14:43.302 "ffdhe8192" 00:14:43.302 ] 00:14:43.302 } 00:14:43.302 }, 00:14:43.302 { 00:14:43.302 "method": "bdev_nvme_set_hotplug", 00:14:43.302 "params": { 00:14:43.302 "period_us": 100000, 00:14:43.302 "enable": false 00:14:43.302 } 00:14:43.302 }, 00:14:43.302 { 00:14:43.302 "method": "bdev_malloc_create", 00:14:43.302 "params": { 00:14:43.302 "name": "malloc0", 00:14:43.302 "num_blocks": 8192, 00:14:43.302 "block_size": 4096, 00:14:43.302 "physical_block_size": 4096, 00:14:43.302 "uuid": "a0633f24-ff06-4a84-8de2-77de37b13d51", 00:14:43.302 "optimal_io_boundary": 0 00:14:43.302 } 00:14:43.302 }, 00:14:43.302 { 00:14:43.302 "method": "bdev_wait_for_examine" 00:14:43.302 } 00:14:43.302 ] 00:14:43.302 }, 00:14:43.302 { 00:14:43.302 "subsystem": "nbd", 00:14:43.302 "config": [] 00:14:43.302 }, 00:14:43.302 { 00:14:43.302 "subsystem": "scheduler", 00:14:43.302 "config": [ 00:14:43.302 { 00:14:43.302 "method": "framework_set_scheduler", 00:14:43.302 "params": { 00:14:43.302 "name": "static" 00:14:43.302 } 00:14:43.302 } 00:14:43.302 ] 00:14:43.302 }, 00:14:43.302 { 00:14:43.302 "subsystem": "nvmf", 00:14:43.302 "config": [ 00:14:43.302 { 00:14:43.302 "method": "nvmf_set_config", 00:14:43.302 "params": { 00:14:43.302 "discovery_filter": "match_any", 00:14:43.303 "admin_cmd_passthru": { 00:14:43.303 "identify_ctrlr": false 00:14:43.303 } 00:14:43.303 } 00:14:43.303 }, 00:14:43.303 { 00:14:43.303 "method": "nvmf_set_max_subsystems", 00:14:43.303 "params": { 00:14:43.303 
"max_subsystems": 1024 00:14:43.303 } 00:14:43.303 }, 00:14:43.303 { 00:14:43.303 "method": "nvmf_set_crdt", 00:14:43.303 "params": { 00:14:43.303 "crdt1": 0, 00:14:43.303 "crdt2": 0, 00:14:43.303 "crdt3": 0 00:14:43.303 } 00:14:43.303 }, 00:14:43.303 { 00:14:43.303 "method": "nvmf_create_transport", 00:14:43.303 "params": { 00:14:43.303 "trtype": "TCP", 00:14:43.303 "max_queue_depth": 128, 00:14:43.303 "max_io_qpairs_per_ctrlr": 127, 00:14:43.303 "in_capsule_data_size": 4096, 00:14:43.303 "max_io_size": 131072, 00:14:43.303 "io_unit_size": 131072, 00:14:43.303 "max_aq_depth": 128, 00:14:43.303 "num_shared_buffers": 511, 00:14:43.303 "buf_cache_size": 4294967295, 00:14:43.303 "dif_insert_or_strip": false, 00:14:43.303 "zcopy": false, 00:14:43.303 "c2h_success": false, 00:14:43.303 "sock_priority": 0, 00:14:43.303 "abort_timeout_sec": 1, 00:14:43.303 "ack_timeout": 0, 00:14:43.303 "data_wr_pool_size": 0 00:14:43.303 } 00:14:43.303 }, 00:14:43.303 { 00:14:43.303 "method": "nvmf_create_subsystem", 00:14:43.303 "params": { 00:14:43.303 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:43.303 "allow_any_host": false, 00:14:43.303 "serial_number": "SPDK00000000000001", 00:14:43.303 "model_number": "SPDK bdev Controller", 00:14:43.303 "max_namespaces": 10, 00:14:43.303 "min_cntlid": 1, 00:14:43.303 "max_cntlid": 65519, 00:14:43.303 "ana_reporting": false 00:14:43.303 } 00:14:43.303 }, 00:14:43.303 { 00:14:43.303 "method": "nvmf_subsystem_add_host", 00:14:43.303 "params": { 00:14:43.303 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:43.303 "host": "nqn.2016-06.io.spdk:host1", 00:14:43.303 "psk": "/tmp/tmp.XgfR7rpDIq" 00:14:43.303 } 00:14:43.303 }, 00:14:43.303 { 00:14:43.303 "method": "nvmf_subsystem_add_ns", 00:14:43.303 "params": { 00:14:43.303 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:43.303 "namespace": { 00:14:43.303 "nsid": 1, 00:14:43.303 "bdev_name": "malloc0", 00:14:43.303 "nguid": "A0633F24FF064A848DE277DE37B13D51", 00:14:43.303 "uuid": "a0633f24-ff06-4a84-8de2-77de37b13d51", 00:14:43.303 "no_auto_visible": false 00:14:43.303 } 00:14:43.303 } 00:14:43.303 }, 00:14:43.303 { 00:14:43.303 "method": "nvmf_subsystem_add_listener", 00:14:43.303 "params": { 00:14:43.303 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:43.303 "listen_address": { 00:14:43.303 "trtype": "TCP", 00:14:43.303 "adrfam": "IPv4", 00:14:43.303 "traddr": "10.0.0.2", 00:14:43.303 "trsvcid": "4420" 00:14:43.303 }, 00:14:43.303 "secure_channel": true 00:14:43.303 } 00:14:43.303 } 00:14:43.303 ] 00:14:43.303 } 00:14:43.303 ] 00:14:43.303 }' 00:14:43.303 17:03:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73647 00:14:43.303 17:03:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73647 00:14:43.303 17:03:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:14:43.303 17:03:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73647 ']' 00:14:43.303 17:03:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:43.303 17:03:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:43.303 17:03:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:43.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:43.303 17:03:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:43.303 17:03:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:43.561 [2024-07-15 17:03:33.633284] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:14:43.561 [2024-07-15 17:03:33.633687] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:43.561 [2024-07-15 17:03:33.778075] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:43.819 [2024-07-15 17:03:33.908973] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:43.819 [2024-07-15 17:03:33.909044] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:43.819 [2024-07-15 17:03:33.909072] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:43.819 [2024-07-15 17:03:33.909084] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:43.819 [2024-07-15 17:03:33.909093] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:43.819 [2024-07-15 17:03:33.909202] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:43.819 [2024-07-15 17:03:34.081980] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:44.077 [2024-07-15 17:03:34.157100] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:44.077 [2024-07-15 17:03:34.173033] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:44.077 [2024-07-15 17:03:34.189029] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:44.077 [2024-07-15 17:03:34.189249] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:44.335 17:03:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:44.335 17:03:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:44.335 17:03:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:44.335 17:03:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:44.335 17:03:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:44.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:44.594 17:03:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:44.594 17:03:34 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=73679 00:14:44.594 17:03:34 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 73679 /var/tmp/bdevperf.sock 00:14:44.594 17:03:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73679 ']' 00:14:44.594 17:03:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:44.594 17:03:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:44.594 17:03:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:14:44.594 17:03:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:44.594 17:03:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:44.594 17:03:34 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:14:44.594 17:03:34 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:14:44.594 "subsystems": [ 00:14:44.594 { 00:14:44.594 "subsystem": "keyring", 00:14:44.594 "config": [] 00:14:44.594 }, 00:14:44.594 { 00:14:44.594 "subsystem": "iobuf", 00:14:44.594 "config": [ 00:14:44.594 { 00:14:44.594 "method": "iobuf_set_options", 00:14:44.594 "params": { 00:14:44.594 "small_pool_count": 8192, 00:14:44.594 "large_pool_count": 1024, 00:14:44.594 "small_bufsize": 8192, 00:14:44.594 "large_bufsize": 135168 00:14:44.594 } 00:14:44.594 } 00:14:44.594 ] 00:14:44.594 }, 00:14:44.594 { 00:14:44.594 "subsystem": "sock", 00:14:44.594 "config": [ 00:14:44.594 { 00:14:44.594 "method": "sock_set_default_impl", 00:14:44.594 "params": { 00:14:44.594 "impl_name": "uring" 00:14:44.594 } 00:14:44.594 }, 00:14:44.594 { 00:14:44.594 "method": "sock_impl_set_options", 00:14:44.594 "params": { 00:14:44.594 "impl_name": "ssl", 00:14:44.594 "recv_buf_size": 4096, 00:14:44.594 "send_buf_size": 4096, 00:14:44.594 "enable_recv_pipe": true, 00:14:44.594 "enable_quickack": false, 00:14:44.594 "enable_placement_id": 0, 00:14:44.594 "enable_zerocopy_send_server": true, 00:14:44.594 "enable_zerocopy_send_client": false, 00:14:44.594 "zerocopy_threshold": 0, 00:14:44.594 "tls_version": 0, 00:14:44.594 "enable_ktls": false 00:14:44.594 } 00:14:44.594 }, 00:14:44.594 { 00:14:44.594 "method": "sock_impl_set_options", 00:14:44.594 "params": { 00:14:44.594 "impl_name": "posix", 00:14:44.594 "recv_buf_size": 2097152, 00:14:44.594 "send_buf_size": 2097152, 00:14:44.594 "enable_recv_pipe": true, 00:14:44.594 "enable_quickack": false, 00:14:44.594 "enable_placement_id": 0, 00:14:44.594 "enable_zerocopy_send_server": true, 00:14:44.594 "enable_zerocopy_send_client": false, 00:14:44.594 "zerocopy_threshold": 0, 00:14:44.594 "tls_version": 0, 00:14:44.594 "enable_ktls": false 00:14:44.594 } 00:14:44.594 }, 00:14:44.594 { 00:14:44.594 "method": "sock_impl_set_options", 00:14:44.594 "params": { 00:14:44.594 "impl_name": "uring", 00:14:44.594 "recv_buf_size": 2097152, 00:14:44.594 "send_buf_size": 2097152, 00:14:44.594 "enable_recv_pipe": true, 00:14:44.594 "enable_quickack": false, 00:14:44.594 "enable_placement_id": 0, 00:14:44.594 "enable_zerocopy_send_server": false, 00:14:44.594 "enable_zerocopy_send_client": false, 00:14:44.594 "zerocopy_threshold": 0, 00:14:44.594 "tls_version": 0, 00:14:44.594 "enable_ktls": false 00:14:44.594 } 00:14:44.594 } 00:14:44.594 ] 00:14:44.594 }, 00:14:44.594 { 00:14:44.594 "subsystem": "vmd", 00:14:44.594 "config": [] 00:14:44.594 }, 00:14:44.594 { 00:14:44.594 "subsystem": "accel", 00:14:44.594 "config": [ 00:14:44.594 { 00:14:44.594 "method": "accel_set_options", 00:14:44.594 "params": { 00:14:44.594 "small_cache_size": 128, 00:14:44.594 "large_cache_size": 16, 00:14:44.594 "task_count": 2048, 00:14:44.594 "sequence_count": 2048, 00:14:44.594 "buf_count": 2048 00:14:44.594 } 00:14:44.594 } 00:14:44.594 ] 00:14:44.594 }, 00:14:44.594 { 00:14:44.594 "subsystem": "bdev", 00:14:44.594 "config": [ 00:14:44.594 { 00:14:44.594 "method": "bdev_set_options", 00:14:44.594 "params": { 00:14:44.594 "bdev_io_pool_size": 65535, 00:14:44.594 
"bdev_io_cache_size": 256, 00:14:44.594 "bdev_auto_examine": true, 00:14:44.594 "iobuf_small_cache_size": 128, 00:14:44.594 "iobuf_large_cache_size": 16 00:14:44.594 } 00:14:44.594 }, 00:14:44.594 { 00:14:44.594 "method": "bdev_raid_set_options", 00:14:44.594 "params": { 00:14:44.594 "process_window_size_kb": 1024 00:14:44.594 } 00:14:44.594 }, 00:14:44.594 { 00:14:44.594 "method": "bdev_iscsi_set_options", 00:14:44.594 "params": { 00:14:44.594 "timeout_sec": 30 00:14:44.594 } 00:14:44.594 }, 00:14:44.594 { 00:14:44.594 "method": "bdev_nvme_set_options", 00:14:44.594 "params": { 00:14:44.594 "action_on_timeout": "none", 00:14:44.594 "timeout_us": 0, 00:14:44.594 "timeout_admin_us": 0, 00:14:44.594 "keep_alive_timeout_ms": 10000, 00:14:44.594 "arbitration_burst": 0, 00:14:44.594 "low_priority_weight": 0, 00:14:44.594 "medium_priority_weight": 0, 00:14:44.594 "high_priority_weight": 0, 00:14:44.594 "nvme_adminq_poll_period_us": 10000, 00:14:44.594 "nvme_ioq_poll_period_us": 0, 00:14:44.594 "io_queue_requests": 512, 00:14:44.594 "delay_cmd_submit": true, 00:14:44.594 "transport_retry_count": 4, 00:14:44.594 "bdev_retry_count": 3, 00:14:44.594 "transport_ack_timeout": 0, 00:14:44.594 "ctrlr_loss_timeout_sec": 0, 00:14:44.594 "reconnect_delay_sec": 0, 00:14:44.594 "fast_io_fail_timeout_sec": 0, 00:14:44.594 "disable_auto_failback": false, 00:14:44.594 "generate_uuids": false, 00:14:44.594 "transport_tos": 0, 00:14:44.594 "nvme_error_stat": false, 00:14:44.594 "rdma_srq_size": 0, 00:14:44.594 "io_path_stat": false, 00:14:44.594 "allow_accel_sequence": false, 00:14:44.594 "rdma_max_cq_size": 0, 00:14:44.594 "rdma_cm_event_timeout_ms": 0, 00:14:44.594 "dhchap_digests": [ 00:14:44.594 "sha256", 00:14:44.594 "sha384", 00:14:44.594 "sha512" 00:14:44.594 ], 00:14:44.594 "dhchap_dhgroups": [ 00:14:44.594 "null", 00:14:44.594 "ffdhe2048", 00:14:44.594 "ffdhe3072", 00:14:44.594 "ffdhe4096", 00:14:44.594 "ffdhe6144", 00:14:44.594 "ffdhe8192" 00:14:44.594 ] 00:14:44.594 } 00:14:44.594 }, 00:14:44.594 { 00:14:44.594 "method": "bdev_nvme_attach_controller", 00:14:44.594 "params": { 00:14:44.594 "name": "TLSTEST", 00:14:44.594 "trtype": "TCP", 00:14:44.594 "adrfam": "IPv4", 00:14:44.594 "traddr": "10.0.0.2", 00:14:44.594 "trsvcid": "4420", 00:14:44.594 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:44.594 "prchk_reftag": false, 00:14:44.594 "prchk_guard": false, 00:14:44.594 "ctrlr_loss_timeout_sec": 0, 00:14:44.594 "reconnect_delay_sec": 0, 00:14:44.594 "fast_io_fail_timeout_sec": 0, 00:14:44.594 "psk": "/tmp/tmp.XgfR7rpDIq", 00:14:44.594 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:44.594 "hdgst": false, 00:14:44.594 "ddgst": false 00:14:44.594 } 00:14:44.594 }, 00:14:44.594 { 00:14:44.594 "method": "bdev_nvme_set_hotplug", 00:14:44.594 "params": { 00:14:44.594 "period_us": 100000, 00:14:44.594 "enable": false 00:14:44.594 } 00:14:44.594 }, 00:14:44.594 { 00:14:44.594 "method": "bdev_wait_for_examine" 00:14:44.594 } 00:14:44.594 ] 00:14:44.594 }, 00:14:44.594 { 00:14:44.594 "subsystem": "nbd", 00:14:44.594 "config": [] 00:14:44.594 } 00:14:44.594 ] 00:14:44.594 }' 00:14:44.595 [2024-07-15 17:03:34.686807] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:14:44.595 [2024-07-15 17:03:34.686919] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73679 ] 00:14:44.595 [2024-07-15 17:03:34.842531] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:44.853 [2024-07-15 17:03:34.971798] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:44.854 [2024-07-15 17:03:35.111870] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:45.111 [2024-07-15 17:03:35.153247] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:45.111 [2024-07-15 17:03:35.153717] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:45.683 17:03:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:45.683 17:03:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:45.683 17:03:35 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:45.683 Running I/O for 10 seconds... 00:14:55.686 00:14:55.686 Latency(us) 00:14:55.686 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:55.686 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:55.686 Verification LBA range: start 0x0 length 0x2000 00:14:55.686 TLSTESTn1 : 10.02 3883.07 15.17 0.00 0.00 32899.17 7328.12 28001.75 00:14:55.686 =================================================================================================================== 00:14:55.686 Total : 3883.07 15.17 0.00 0.00 32899.17 7328.12 28001.75 00:14:55.686 0 00:14:55.686 17:03:45 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:55.686 17:03:45 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 73679 00:14:55.686 17:03:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73679 ']' 00:14:55.686 17:03:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73679 00:14:55.686 17:03:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:55.686 17:03:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:55.686 17:03:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73679 00:14:55.686 killing process with pid 73679 00:14:55.686 Received shutdown signal, test time was about 10.000000 seconds 00:14:55.686 00:14:55.686 Latency(us) 00:14:55.686 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:55.686 =================================================================================================================== 00:14:55.686 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:55.686 17:03:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:14:55.686 17:03:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:14:55.686 17:03:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73679' 00:14:55.686 17:03:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73679 00:14:55.686 [2024-07-15 17:03:45.894407] app.c:1024:log_deprecation_hits: *WARNING*: 
nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:55.686 17:03:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73679 00:14:55.945 17:03:46 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 73647 00:14:55.945 17:03:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73647 ']' 00:14:55.945 17:03:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73647 00:14:55.945 17:03:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:55.945 17:03:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:55.945 17:03:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73647 00:14:55.945 killing process with pid 73647 00:14:55.945 17:03:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:55.945 17:03:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:55.945 17:03:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73647' 00:14:55.945 17:03:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73647 00:14:55.945 [2024-07-15 17:03:46.157457] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:14:55.945 17:03:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73647 00:14:56.204 17:03:46 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:14:56.204 17:03:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:56.204 17:03:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:56.204 17:03:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:56.204 17:03:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73812 00:14:56.204 17:03:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:56.204 17:03:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73812 00:14:56.204 17:03:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73812 ']' 00:14:56.204 17:03:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:56.204 17:03:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:56.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:56.204 17:03:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:56.204 17:03:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:56.204 17:03:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:56.204 [2024-07-15 17:03:46.457982] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:14:56.204 [2024-07-15 17:03:46.458381] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:56.462 [2024-07-15 17:03:46.596620] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:56.462 [2024-07-15 17:03:46.729261] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:14:56.462 [2024-07-15 17:03:46.729333] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:56.462 [2024-07-15 17:03:46.729383] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:56.462 [2024-07-15 17:03:46.729405] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:56.462 [2024-07-15 17:03:46.729420] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:56.462 [2024-07-15 17:03:46.729456] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:56.720 [2024-07-15 17:03:46.788481] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:57.285 17:03:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:57.285 17:03:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:57.285 17:03:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:57.285 17:03:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:57.285 17:03:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:57.285 17:03:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:57.285 17:03:47 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.XgfR7rpDIq 00:14:57.285 17:03:47 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.XgfR7rpDIq 00:14:57.285 17:03:47 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:57.543 [2024-07-15 17:03:47.797584] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:57.543 17:03:47 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:57.803 17:03:48 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:14:58.066 [2024-07-15 17:03:48.301680] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:58.066 [2024-07-15 17:03:48.301906] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:58.066 17:03:48 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:58.373 malloc0 00:14:58.373 17:03:48 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:58.647 17:03:48 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.XgfR7rpDIq 00:14:58.905 [2024-07-15 17:03:49.017270] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:58.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
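The repeated deprecation warnings about 'PSK path' and 'spdk_nvme_ctrlr_opts.psk' point at the flow the final test case exercises next in the trace: register the PSK file as a keyring key and hand the key name, rather than the file path, to bdev_nvme_attach_controller. Reduced to the two RPCs seen below (socket path, key name and PSK file are those of this run):

  # register the PSK with the keyring, then reference it by name instead of by path
  scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.XgfR7rpDIq
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1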
00:14:58.905 17:03:49 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=73867 00:14:58.905 17:03:49 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:14:58.905 17:03:49 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:58.905 17:03:49 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 73867 /var/tmp/bdevperf.sock 00:14:58.905 17:03:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73867 ']' 00:14:58.905 17:03:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:58.905 17:03:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:58.905 17:03:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:58.905 17:03:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:58.905 17:03:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:58.905 [2024-07-15 17:03:49.085606] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:14:58.905 [2024-07-15 17:03:49.085932] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73867 ] 00:14:59.163 [2024-07-15 17:03:49.217118] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:59.164 [2024-07-15 17:03:49.363204] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:59.164 [2024-07-15 17:03:49.434206] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:00.100 17:03:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:00.100 17:03:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:00.100 17:03:50 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.XgfR7rpDIq 00:15:00.358 17:03:50 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:15:00.616 [2024-07-15 17:03:50.674513] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:00.616 nvme0n1 00:15:00.616 17:03:50 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:00.616 Running I/O for 1 seconds... 
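On the initiator side, the test points the same key file at bdevperf through its private RPC socket: the PSK is registered with keyring_file_add_key and then referenced by name when attaching the controller over TLS. A condensed sketch of the calls shown above (rpc.py and bdevperf.py paths shortened for readability):

    # Register the PSK under the name "key0" inside the bdevperf application.
    rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.XgfR7rpDIq
    # Attach to the TLS listener, referencing the key by name rather than by path.
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
    # The namespace shows up as bdev "nvme0n1"; perform_tests then kicks off the verify
    # workload whose one-second latency summary is printed next.
    bdevperf.py -s /var/tmp/bdevperf.sock perform_tests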
00:15:01.989 00:15:01.990 Latency(us) 00:15:01.990 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:01.990 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:01.990 Verification LBA range: start 0x0 length 0x2000 00:15:01.990 nvme0n1 : 1.02 3736.41 14.60 0.00 0.00 33993.30 5153.51 35031.97 00:15:01.990 =================================================================================================================== 00:15:01.990 Total : 3736.41 14.60 0.00 0.00 33993.30 5153.51 35031.97 00:15:01.990 0 00:15:01.990 17:03:51 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 73867 00:15:01.990 17:03:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73867 ']' 00:15:01.990 17:03:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73867 00:15:01.990 17:03:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:01.990 17:03:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:01.990 17:03:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73867 00:15:01.990 killing process with pid 73867 00:15:01.990 Received shutdown signal, test time was about 1.000000 seconds 00:15:01.990 00:15:01.990 Latency(us) 00:15:01.990 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:01.990 =================================================================================================================== 00:15:01.990 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:01.990 17:03:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:01.990 17:03:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:01.990 17:03:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73867' 00:15:01.990 17:03:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73867 00:15:01.990 17:03:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73867 00:15:01.990 17:03:52 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 73812 00:15:01.990 17:03:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73812 ']' 00:15:01.990 17:03:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73812 00:15:01.990 17:03:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:01.990 17:03:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:01.990 17:03:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73812 00:15:01.990 killing process with pid 73812 00:15:01.990 17:03:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:01.990 17:03:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:01.990 17:03:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73812' 00:15:01.990 17:03:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73812 00:15:01.990 [2024-07-15 17:03:52.284048] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:15:01.990 17:03:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73812 00:15:02.249 17:03:52 nvmf_tcp.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 00:15:02.249 17:03:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:02.249 17:03:52 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:15:02.249 17:03:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:02.249 17:03:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73918 00:15:02.249 17:03:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:02.249 17:03:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73918 00:15:02.249 17:03:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73918 ']' 00:15:02.249 17:03:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:02.249 17:03:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:02.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:02.249 17:03:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:02.249 17:03:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:02.249 17:03:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:02.508 [2024-07-15 17:03:52.584999] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:15:02.508 [2024-07-15 17:03:52.585093] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:02.508 [2024-07-15 17:03:52.721997] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:02.767 [2024-07-15 17:03:52.831791] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:02.767 [2024-07-15 17:03:52.831840] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:02.767 [2024-07-15 17:03:52.831852] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:02.767 [2024-07-15 17:03:52.831861] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:02.767 [2024-07-15 17:03:52.831869] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
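The app_setup_trace notices above describe how the trace buffer enabled by -e 0xFFFF can be inspected while the target is running; the suite's cleanup later archives the same buffer. A sketch, using the instance id and shared-memory name printed in the log (the snapshot destination path is arbitrary):

    # Snapshot live tracepoints from the running nvmf_tgt (shm name "nvmf", instance 0).
    spdk_trace -s nvmf -i 0
    # Or keep the raw buffer for offline analysis, as the notice suggests.
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0.snapshot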
00:15:02.767 [2024-07-15 17:03:52.831895] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:02.767 [2024-07-15 17:03:52.885203] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:02.767 17:03:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:02.767 17:03:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:02.767 17:03:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:02.767 17:03:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:02.767 17:03:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:02.767 17:03:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:02.767 17:03:52 nvmf_tcp.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:15:02.767 17:03:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:02.767 17:03:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:02.767 [2024-07-15 17:03:52.996437] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:02.767 malloc0 00:15:02.767 [2024-07-15 17:03:53.027645] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:02.767 [2024-07-15 17:03:53.027838] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:02.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:02.767 17:03:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:02.767 17:03:53 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=73948 00:15:02.767 17:03:53 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # waitforlisten 73948 /var/tmp/bdevperf.sock 00:15:02.767 17:03:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73948 ']' 00:15:02.767 17:03:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:02.767 17:03:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:02.767 17:03:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:02.767 17:03:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:02.767 17:03:53 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:15:02.767 17:03:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:03.026 [2024-07-15 17:03:53.109896] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
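Each bdevperf instance in this run is launched with the same flag set before any RPCs are issued; -z makes it start idle and wait on its RPC socket, which is why waitforlisten polls /var/tmp/bdevperf.sock before the test continues. A sketch of that launch-and-wait pattern, with the wait loop paraphrased rather than copied from autotest_common.sh:

    # -m 2: core mask 0x2 (core 1); -z: stay idle until told to run; -r: private RPC socket.
    # -q 128 -o 4k -w verify -t 1: queue depth 128, 4 KiB I/Os, verify workload, 1 second run.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 &
    bdevperf_pid=$!
    # Roughly what waitforlisten does: poll until the RPC socket answers.
    until rpc.py -s /var/tmp/bdevperf.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done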
00:15:03.026 [2024-07-15 17:03:53.109989] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73948 ] 00:15:03.026 [2024-07-15 17:03:53.243723] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:03.286 [2024-07-15 17:03:53.390504] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:03.286 [2024-07-15 17:03:53.460707] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:04.219 17:03:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:04.219 17:03:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:04.219 17:03:54 nvmf_tcp.nvmf_tls -- target/tls.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.XgfR7rpDIq 00:15:04.219 17:03:54 nvmf_tcp.nvmf_tls -- target/tls.sh@258 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:15:04.477 [2024-07-15 17:03:54.695674] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:04.477 nvme0n1 00:15:04.736 17:03:54 nvmf_tcp.nvmf_tls -- target/tls.sh@262 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:04.736 Running I/O for 1 seconds... 00:15:05.693 00:15:05.693 Latency(us) 00:15:05.693 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:05.693 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:05.693 Verification LBA range: start 0x0 length 0x2000 00:15:05.693 nvme0n1 : 1.03 3967.00 15.50 0.00 0.00 31887.08 7566.43 21805.61 00:15:05.693 =================================================================================================================== 00:15:05.693 Total : 3967.00 15.50 0.00 0.00 31887.08 7566.43 21805.61 00:15:05.693 0 00:15:05.693 17:03:55 nvmf_tcp.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 00:15:05.693 17:03:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.693 17:03:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:05.951 17:03:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.951 17:03:56 nvmf_tcp.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:15:05.951 "subsystems": [ 00:15:05.951 { 00:15:05.951 "subsystem": "keyring", 00:15:05.951 "config": [ 00:15:05.951 { 00:15:05.951 "method": "keyring_file_add_key", 00:15:05.951 "params": { 00:15:05.951 "name": "key0", 00:15:05.951 "path": "/tmp/tmp.XgfR7rpDIq" 00:15:05.951 } 00:15:05.951 } 00:15:05.951 ] 00:15:05.951 }, 00:15:05.951 { 00:15:05.951 "subsystem": "iobuf", 00:15:05.951 "config": [ 00:15:05.951 { 00:15:05.951 "method": "iobuf_set_options", 00:15:05.951 "params": { 00:15:05.951 "small_pool_count": 8192, 00:15:05.951 "large_pool_count": 1024, 00:15:05.951 "small_bufsize": 8192, 00:15:05.951 "large_bufsize": 135168 00:15:05.951 } 00:15:05.951 } 00:15:05.951 ] 00:15:05.951 }, 00:15:05.951 { 00:15:05.951 "subsystem": "sock", 00:15:05.951 "config": [ 00:15:05.951 { 00:15:05.951 "method": "sock_set_default_impl", 00:15:05.951 "params": { 00:15:05.951 "impl_name": "uring" 
00:15:05.951 } 00:15:05.951 }, 00:15:05.951 { 00:15:05.951 "method": "sock_impl_set_options", 00:15:05.951 "params": { 00:15:05.951 "impl_name": "ssl", 00:15:05.951 "recv_buf_size": 4096, 00:15:05.951 "send_buf_size": 4096, 00:15:05.951 "enable_recv_pipe": true, 00:15:05.951 "enable_quickack": false, 00:15:05.951 "enable_placement_id": 0, 00:15:05.951 "enable_zerocopy_send_server": true, 00:15:05.951 "enable_zerocopy_send_client": false, 00:15:05.951 "zerocopy_threshold": 0, 00:15:05.951 "tls_version": 0, 00:15:05.951 "enable_ktls": false 00:15:05.951 } 00:15:05.951 }, 00:15:05.951 { 00:15:05.951 "method": "sock_impl_set_options", 00:15:05.951 "params": { 00:15:05.951 "impl_name": "posix", 00:15:05.951 "recv_buf_size": 2097152, 00:15:05.951 "send_buf_size": 2097152, 00:15:05.951 "enable_recv_pipe": true, 00:15:05.951 "enable_quickack": false, 00:15:05.951 "enable_placement_id": 0, 00:15:05.951 "enable_zerocopy_send_server": true, 00:15:05.951 "enable_zerocopy_send_client": false, 00:15:05.951 "zerocopy_threshold": 0, 00:15:05.951 "tls_version": 0, 00:15:05.951 "enable_ktls": false 00:15:05.951 } 00:15:05.951 }, 00:15:05.951 { 00:15:05.951 "method": "sock_impl_set_options", 00:15:05.951 "params": { 00:15:05.951 "impl_name": "uring", 00:15:05.951 "recv_buf_size": 2097152, 00:15:05.951 "send_buf_size": 2097152, 00:15:05.951 "enable_recv_pipe": true, 00:15:05.951 "enable_quickack": false, 00:15:05.951 "enable_placement_id": 0, 00:15:05.951 "enable_zerocopy_send_server": false, 00:15:05.951 "enable_zerocopy_send_client": false, 00:15:05.951 "zerocopy_threshold": 0, 00:15:05.951 "tls_version": 0, 00:15:05.951 "enable_ktls": false 00:15:05.951 } 00:15:05.951 } 00:15:05.951 ] 00:15:05.951 }, 00:15:05.951 { 00:15:05.951 "subsystem": "vmd", 00:15:05.951 "config": [] 00:15:05.951 }, 00:15:05.951 { 00:15:05.951 "subsystem": "accel", 00:15:05.951 "config": [ 00:15:05.951 { 00:15:05.951 "method": "accel_set_options", 00:15:05.951 "params": { 00:15:05.951 "small_cache_size": 128, 00:15:05.951 "large_cache_size": 16, 00:15:05.951 "task_count": 2048, 00:15:05.951 "sequence_count": 2048, 00:15:05.951 "buf_count": 2048 00:15:05.951 } 00:15:05.951 } 00:15:05.951 ] 00:15:05.951 }, 00:15:05.951 { 00:15:05.951 "subsystem": "bdev", 00:15:05.951 "config": [ 00:15:05.951 { 00:15:05.951 "method": "bdev_set_options", 00:15:05.951 "params": { 00:15:05.951 "bdev_io_pool_size": 65535, 00:15:05.951 "bdev_io_cache_size": 256, 00:15:05.951 "bdev_auto_examine": true, 00:15:05.951 "iobuf_small_cache_size": 128, 00:15:05.951 "iobuf_large_cache_size": 16 00:15:05.951 } 00:15:05.952 }, 00:15:05.952 { 00:15:05.952 "method": "bdev_raid_set_options", 00:15:05.952 "params": { 00:15:05.952 "process_window_size_kb": 1024 00:15:05.952 } 00:15:05.952 }, 00:15:05.952 { 00:15:05.952 "method": "bdev_iscsi_set_options", 00:15:05.952 "params": { 00:15:05.952 "timeout_sec": 30 00:15:05.952 } 00:15:05.952 }, 00:15:05.952 { 00:15:05.952 "method": "bdev_nvme_set_options", 00:15:05.952 "params": { 00:15:05.952 "action_on_timeout": "none", 00:15:05.952 "timeout_us": 0, 00:15:05.952 "timeout_admin_us": 0, 00:15:05.952 "keep_alive_timeout_ms": 10000, 00:15:05.952 "arbitration_burst": 0, 00:15:05.952 "low_priority_weight": 0, 00:15:05.952 "medium_priority_weight": 0, 00:15:05.952 "high_priority_weight": 0, 00:15:05.952 "nvme_adminq_poll_period_us": 10000, 00:15:05.952 "nvme_ioq_poll_period_us": 0, 00:15:05.952 "io_queue_requests": 0, 00:15:05.952 "delay_cmd_submit": true, 00:15:05.952 "transport_retry_count": 4, 00:15:05.952 "bdev_retry_count": 3, 
00:15:05.952 "transport_ack_timeout": 0, 00:15:05.952 "ctrlr_loss_timeout_sec": 0, 00:15:05.952 "reconnect_delay_sec": 0, 00:15:05.952 "fast_io_fail_timeout_sec": 0, 00:15:05.952 "disable_auto_failback": false, 00:15:05.952 "generate_uuids": false, 00:15:05.952 "transport_tos": 0, 00:15:05.952 "nvme_error_stat": false, 00:15:05.952 "rdma_srq_size": 0, 00:15:05.952 "io_path_stat": false, 00:15:05.952 "allow_accel_sequence": false, 00:15:05.952 "rdma_max_cq_size": 0, 00:15:05.952 "rdma_cm_event_timeout_ms": 0, 00:15:05.952 "dhchap_digests": [ 00:15:05.952 "sha256", 00:15:05.952 "sha384", 00:15:05.952 "sha512" 00:15:05.952 ], 00:15:05.952 "dhchap_dhgroups": [ 00:15:05.952 "null", 00:15:05.952 "ffdhe2048", 00:15:05.952 "ffdhe3072", 00:15:05.952 "ffdhe4096", 00:15:05.952 "ffdhe6144", 00:15:05.952 "ffdhe8192" 00:15:05.952 ] 00:15:05.952 } 00:15:05.952 }, 00:15:05.952 { 00:15:05.952 "method": "bdev_nvme_set_hotplug", 00:15:05.952 "params": { 00:15:05.952 "period_us": 100000, 00:15:05.952 "enable": false 00:15:05.952 } 00:15:05.952 }, 00:15:05.952 { 00:15:05.952 "method": "bdev_malloc_create", 00:15:05.952 "params": { 00:15:05.952 "name": "malloc0", 00:15:05.952 "num_blocks": 8192, 00:15:05.952 "block_size": 4096, 00:15:05.952 "physical_block_size": 4096, 00:15:05.952 "uuid": "3603fe72-7090-4957-8f3b-afbd4118bb47", 00:15:05.952 "optimal_io_boundary": 0 00:15:05.952 } 00:15:05.952 }, 00:15:05.952 { 00:15:05.952 "method": "bdev_wait_for_examine" 00:15:05.952 } 00:15:05.952 ] 00:15:05.952 }, 00:15:05.952 { 00:15:05.952 "subsystem": "nbd", 00:15:05.952 "config": [] 00:15:05.952 }, 00:15:05.952 { 00:15:05.952 "subsystem": "scheduler", 00:15:05.952 "config": [ 00:15:05.952 { 00:15:05.952 "method": "framework_set_scheduler", 00:15:05.952 "params": { 00:15:05.952 "name": "static" 00:15:05.952 } 00:15:05.952 } 00:15:05.952 ] 00:15:05.952 }, 00:15:05.952 { 00:15:05.952 "subsystem": "nvmf", 00:15:05.952 "config": [ 00:15:05.952 { 00:15:05.952 "method": "nvmf_set_config", 00:15:05.952 "params": { 00:15:05.952 "discovery_filter": "match_any", 00:15:05.952 "admin_cmd_passthru": { 00:15:05.952 "identify_ctrlr": false 00:15:05.952 } 00:15:05.952 } 00:15:05.952 }, 00:15:05.952 { 00:15:05.952 "method": "nvmf_set_max_subsystems", 00:15:05.952 "params": { 00:15:05.952 "max_subsystems": 1024 00:15:05.952 } 00:15:05.952 }, 00:15:05.952 { 00:15:05.952 "method": "nvmf_set_crdt", 00:15:05.952 "params": { 00:15:05.952 "crdt1": 0, 00:15:05.952 "crdt2": 0, 00:15:05.952 "crdt3": 0 00:15:05.952 } 00:15:05.952 }, 00:15:05.952 { 00:15:05.952 "method": "nvmf_create_transport", 00:15:05.952 "params": { 00:15:05.952 "trtype": "TCP", 00:15:05.952 "max_queue_depth": 128, 00:15:05.952 "max_io_qpairs_per_ctrlr": 127, 00:15:05.952 "in_capsule_data_size": 4096, 00:15:05.952 "max_io_size": 131072, 00:15:05.952 "io_unit_size": 131072, 00:15:05.952 "max_aq_depth": 128, 00:15:05.952 "num_shared_buffers": 511, 00:15:05.952 "buf_cache_size": 4294967295, 00:15:05.952 "dif_insert_or_strip": false, 00:15:05.952 "zcopy": false, 00:15:05.952 "c2h_success": false, 00:15:05.952 "sock_priority": 0, 00:15:05.952 "abort_timeout_sec": 1, 00:15:05.952 "ack_timeout": 0, 00:15:05.952 "data_wr_pool_size": 0 00:15:05.952 } 00:15:05.952 }, 00:15:05.952 { 00:15:05.952 "method": "nvmf_create_subsystem", 00:15:05.952 "params": { 00:15:05.952 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:05.952 "allow_any_host": false, 00:15:05.952 "serial_number": "00000000000000000000", 00:15:05.952 "model_number": "SPDK bdev Controller", 00:15:05.952 "max_namespaces": 32, 
00:15:05.952 "min_cntlid": 1, 00:15:05.952 "max_cntlid": 65519, 00:15:05.952 "ana_reporting": false 00:15:05.952 } 00:15:05.952 }, 00:15:05.952 { 00:15:05.952 "method": "nvmf_subsystem_add_host", 00:15:05.952 "params": { 00:15:05.952 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:05.952 "host": "nqn.2016-06.io.spdk:host1", 00:15:05.952 "psk": "key0" 00:15:05.952 } 00:15:05.952 }, 00:15:05.952 { 00:15:05.952 "method": "nvmf_subsystem_add_ns", 00:15:05.952 "params": { 00:15:05.952 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:05.952 "namespace": { 00:15:05.952 "nsid": 1, 00:15:05.952 "bdev_name": "malloc0", 00:15:05.952 "nguid": "3603FE72709049578F3BAFBD4118BB47", 00:15:05.952 "uuid": "3603fe72-7090-4957-8f3b-afbd4118bb47", 00:15:05.952 "no_auto_visible": false 00:15:05.952 } 00:15:05.952 } 00:15:05.952 }, 00:15:05.952 { 00:15:05.952 "method": "nvmf_subsystem_add_listener", 00:15:05.952 "params": { 00:15:05.952 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:05.952 "listen_address": { 00:15:05.952 "trtype": "TCP", 00:15:05.952 "adrfam": "IPv4", 00:15:05.952 "traddr": "10.0.0.2", 00:15:05.952 "trsvcid": "4420" 00:15:05.952 }, 00:15:05.952 "secure_channel": false, 00:15:05.952 "sock_impl": "ssl" 00:15:05.952 } 00:15:05.952 } 00:15:05.952 ] 00:15:05.952 } 00:15:05.952 ] 00:15:05.952 }' 00:15:05.952 17:03:56 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:15:06.211 17:03:56 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 00:15:06.211 "subsystems": [ 00:15:06.211 { 00:15:06.211 "subsystem": "keyring", 00:15:06.211 "config": [ 00:15:06.211 { 00:15:06.211 "method": "keyring_file_add_key", 00:15:06.211 "params": { 00:15:06.211 "name": "key0", 00:15:06.211 "path": "/tmp/tmp.XgfR7rpDIq" 00:15:06.211 } 00:15:06.211 } 00:15:06.211 ] 00:15:06.211 }, 00:15:06.211 { 00:15:06.211 "subsystem": "iobuf", 00:15:06.211 "config": [ 00:15:06.211 { 00:15:06.211 "method": "iobuf_set_options", 00:15:06.211 "params": { 00:15:06.211 "small_pool_count": 8192, 00:15:06.211 "large_pool_count": 1024, 00:15:06.211 "small_bufsize": 8192, 00:15:06.211 "large_bufsize": 135168 00:15:06.211 } 00:15:06.211 } 00:15:06.211 ] 00:15:06.211 }, 00:15:06.211 { 00:15:06.211 "subsystem": "sock", 00:15:06.211 "config": [ 00:15:06.211 { 00:15:06.211 "method": "sock_set_default_impl", 00:15:06.211 "params": { 00:15:06.211 "impl_name": "uring" 00:15:06.211 } 00:15:06.211 }, 00:15:06.211 { 00:15:06.211 "method": "sock_impl_set_options", 00:15:06.211 "params": { 00:15:06.211 "impl_name": "ssl", 00:15:06.211 "recv_buf_size": 4096, 00:15:06.211 "send_buf_size": 4096, 00:15:06.211 "enable_recv_pipe": true, 00:15:06.211 "enable_quickack": false, 00:15:06.211 "enable_placement_id": 0, 00:15:06.211 "enable_zerocopy_send_server": true, 00:15:06.211 "enable_zerocopy_send_client": false, 00:15:06.211 "zerocopy_threshold": 0, 00:15:06.211 "tls_version": 0, 00:15:06.211 "enable_ktls": false 00:15:06.211 } 00:15:06.211 }, 00:15:06.211 { 00:15:06.211 "method": "sock_impl_set_options", 00:15:06.211 "params": { 00:15:06.211 "impl_name": "posix", 00:15:06.211 "recv_buf_size": 2097152, 00:15:06.211 "send_buf_size": 2097152, 00:15:06.211 "enable_recv_pipe": true, 00:15:06.211 "enable_quickack": false, 00:15:06.211 "enable_placement_id": 0, 00:15:06.211 "enable_zerocopy_send_server": true, 00:15:06.211 "enable_zerocopy_send_client": false, 00:15:06.211 "zerocopy_threshold": 0, 00:15:06.211 "tls_version": 0, 00:15:06.211 "enable_ktls": false 00:15:06.211 } 00:15:06.211 }, 00:15:06.211 { 
00:15:06.211 "method": "sock_impl_set_options", 00:15:06.211 "params": { 00:15:06.211 "impl_name": "uring", 00:15:06.211 "recv_buf_size": 2097152, 00:15:06.211 "send_buf_size": 2097152, 00:15:06.211 "enable_recv_pipe": true, 00:15:06.211 "enable_quickack": false, 00:15:06.211 "enable_placement_id": 0, 00:15:06.211 "enable_zerocopy_send_server": false, 00:15:06.211 "enable_zerocopy_send_client": false, 00:15:06.211 "zerocopy_threshold": 0, 00:15:06.211 "tls_version": 0, 00:15:06.211 "enable_ktls": false 00:15:06.211 } 00:15:06.211 } 00:15:06.211 ] 00:15:06.211 }, 00:15:06.211 { 00:15:06.211 "subsystem": "vmd", 00:15:06.211 "config": [] 00:15:06.211 }, 00:15:06.211 { 00:15:06.211 "subsystem": "accel", 00:15:06.211 "config": [ 00:15:06.211 { 00:15:06.211 "method": "accel_set_options", 00:15:06.211 "params": { 00:15:06.211 "small_cache_size": 128, 00:15:06.211 "large_cache_size": 16, 00:15:06.211 "task_count": 2048, 00:15:06.211 "sequence_count": 2048, 00:15:06.211 "buf_count": 2048 00:15:06.211 } 00:15:06.211 } 00:15:06.211 ] 00:15:06.211 }, 00:15:06.211 { 00:15:06.211 "subsystem": "bdev", 00:15:06.211 "config": [ 00:15:06.211 { 00:15:06.211 "method": "bdev_set_options", 00:15:06.211 "params": { 00:15:06.211 "bdev_io_pool_size": 65535, 00:15:06.211 "bdev_io_cache_size": 256, 00:15:06.211 "bdev_auto_examine": true, 00:15:06.211 "iobuf_small_cache_size": 128, 00:15:06.211 "iobuf_large_cache_size": 16 00:15:06.211 } 00:15:06.211 }, 00:15:06.211 { 00:15:06.211 "method": "bdev_raid_set_options", 00:15:06.211 "params": { 00:15:06.211 "process_window_size_kb": 1024 00:15:06.211 } 00:15:06.211 }, 00:15:06.211 { 00:15:06.211 "method": "bdev_iscsi_set_options", 00:15:06.211 "params": { 00:15:06.211 "timeout_sec": 30 00:15:06.211 } 00:15:06.211 }, 00:15:06.211 { 00:15:06.211 "method": "bdev_nvme_set_options", 00:15:06.211 "params": { 00:15:06.211 "action_on_timeout": "none", 00:15:06.211 "timeout_us": 0, 00:15:06.211 "timeout_admin_us": 0, 00:15:06.211 "keep_alive_timeout_ms": 10000, 00:15:06.212 "arbitration_burst": 0, 00:15:06.212 "low_priority_weight": 0, 00:15:06.212 "medium_priority_weight": 0, 00:15:06.212 "high_priority_weight": 0, 00:15:06.212 "nvme_adminq_poll_period_us": 10000, 00:15:06.212 "nvme_ioq_poll_period_us": 0, 00:15:06.212 "io_queue_requests": 512, 00:15:06.212 "delay_cmd_submit": true, 00:15:06.212 "transport_retry_count": 4, 00:15:06.212 "bdev_retry_count": 3, 00:15:06.212 "transport_ack_timeout": 0, 00:15:06.212 "ctrlr_loss_timeout_sec": 0, 00:15:06.212 "reconnect_delay_sec": 0, 00:15:06.212 "fast_io_fail_timeout_sec": 0, 00:15:06.212 "disable_auto_failback": false, 00:15:06.212 "generate_uuids": false, 00:15:06.212 "transport_tos": 0, 00:15:06.212 "nvme_error_stat": false, 00:15:06.212 "rdma_srq_size": 0, 00:15:06.212 "io_path_stat": false, 00:15:06.212 "allow_accel_sequence": false, 00:15:06.212 "rdma_max_cq_size": 0, 00:15:06.212 "rdma_cm_event_timeout_ms": 0, 00:15:06.212 "dhchap_digests": [ 00:15:06.212 "sha256", 00:15:06.212 "sha384", 00:15:06.212 "sha512" 00:15:06.212 ], 00:15:06.212 "dhchap_dhgroups": [ 00:15:06.212 "null", 00:15:06.212 "ffdhe2048", 00:15:06.212 "ffdhe3072", 00:15:06.212 "ffdhe4096", 00:15:06.212 "ffdhe6144", 00:15:06.212 "ffdhe8192" 00:15:06.212 ] 00:15:06.212 } 00:15:06.212 }, 00:15:06.212 { 00:15:06.212 "method": "bdev_nvme_attach_controller", 00:15:06.212 "params": { 00:15:06.212 "name": "nvme0", 00:15:06.212 "trtype": "TCP", 00:15:06.212 "adrfam": "IPv4", 00:15:06.212 "traddr": "10.0.0.2", 00:15:06.212 "trsvcid": "4420", 00:15:06.212 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:15:06.212 "prchk_reftag": false, 00:15:06.212 "prchk_guard": false, 00:15:06.212 "ctrlr_loss_timeout_sec": 0, 00:15:06.212 "reconnect_delay_sec": 0, 00:15:06.212 "fast_io_fail_timeout_sec": 0, 00:15:06.212 "psk": "key0", 00:15:06.212 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:06.212 "hdgst": false, 00:15:06.212 "ddgst": false 00:15:06.212 } 00:15:06.212 }, 00:15:06.212 { 00:15:06.212 "method": "bdev_nvme_set_hotplug", 00:15:06.212 "params": { 00:15:06.212 "period_us": 100000, 00:15:06.212 "enable": false 00:15:06.212 } 00:15:06.212 }, 00:15:06.212 { 00:15:06.212 "method": "bdev_enable_histogram", 00:15:06.212 "params": { 00:15:06.212 "name": "nvme0n1", 00:15:06.212 "enable": true 00:15:06.212 } 00:15:06.212 }, 00:15:06.212 { 00:15:06.212 "method": "bdev_wait_for_examine" 00:15:06.212 } 00:15:06.212 ] 00:15:06.212 }, 00:15:06.212 { 00:15:06.212 "subsystem": "nbd", 00:15:06.212 "config": [] 00:15:06.212 } 00:15:06.212 ] 00:15:06.212 }' 00:15:06.212 17:03:56 nvmf_tcp.nvmf_tls -- target/tls.sh@268 -- # killprocess 73948 00:15:06.212 17:03:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73948 ']' 00:15:06.212 17:03:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73948 00:15:06.212 17:03:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:06.212 17:03:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:06.212 17:03:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73948 00:15:06.212 killing process with pid 73948 00:15:06.212 Received shutdown signal, test time was about 1.000000 seconds 00:15:06.212 00:15:06.212 Latency(us) 00:15:06.212 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:06.212 =================================================================================================================== 00:15:06.212 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:06.212 17:03:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:06.212 17:03:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:06.212 17:03:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73948' 00:15:06.212 17:03:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73948 00:15:06.212 17:03:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73948 00:15:06.471 17:03:56 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # killprocess 73918 00:15:06.471 17:03:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73918 ']' 00:15:06.471 17:03:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73918 00:15:06.471 17:03:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:06.471 17:03:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:06.471 17:03:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73918 00:15:06.471 killing process with pid 73918 00:15:06.471 17:03:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:06.471 17:03:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:06.471 17:03:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73918' 00:15:06.471 17:03:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73918 00:15:06.471 17:03:56 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@972 -- # wait 73918 00:15:06.730 17:03:56 nvmf_tcp.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:15:06.730 17:03:56 nvmf_tcp.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:15:06.730 "subsystems": [ 00:15:06.730 { 00:15:06.730 "subsystem": "keyring", 00:15:06.730 "config": [ 00:15:06.730 { 00:15:06.730 "method": "keyring_file_add_key", 00:15:06.730 "params": { 00:15:06.730 "name": "key0", 00:15:06.731 "path": "/tmp/tmp.XgfR7rpDIq" 00:15:06.731 } 00:15:06.731 } 00:15:06.731 ] 00:15:06.731 }, 00:15:06.731 { 00:15:06.731 "subsystem": "iobuf", 00:15:06.731 "config": [ 00:15:06.731 { 00:15:06.731 "method": "iobuf_set_options", 00:15:06.731 "params": { 00:15:06.731 "small_pool_count": 8192, 00:15:06.731 "large_pool_count": 1024, 00:15:06.731 "small_bufsize": 8192, 00:15:06.731 "large_bufsize": 135168 00:15:06.731 } 00:15:06.731 } 00:15:06.731 ] 00:15:06.731 }, 00:15:06.731 { 00:15:06.731 "subsystem": "sock", 00:15:06.731 "config": [ 00:15:06.731 { 00:15:06.731 "method": "sock_set_default_impl", 00:15:06.731 "params": { 00:15:06.731 "impl_name": "uring" 00:15:06.731 } 00:15:06.731 }, 00:15:06.731 { 00:15:06.731 "method": "sock_impl_set_options", 00:15:06.731 "params": { 00:15:06.731 "impl_name": "ssl", 00:15:06.731 "recv_buf_size": 4096, 00:15:06.731 "send_buf_size": 4096, 00:15:06.731 "enable_recv_pipe": true, 00:15:06.731 "enable_quickack": false, 00:15:06.731 "enable_placement_id": 0, 00:15:06.731 "enable_zerocopy_send_server": true, 00:15:06.731 "enable_zerocopy_send_client": false, 00:15:06.731 "zerocopy_threshold": 0, 00:15:06.731 "tls_version": 0, 00:15:06.731 "enable_ktls": false 00:15:06.731 } 00:15:06.731 }, 00:15:06.731 { 00:15:06.731 "method": "sock_impl_set_options", 00:15:06.731 "params": { 00:15:06.731 "impl_name": "posix", 00:15:06.731 "recv_buf_size": 2097152, 00:15:06.731 "send_buf_size": 2097152, 00:15:06.731 "enable_recv_pipe": true, 00:15:06.731 "enable_quickack": false, 00:15:06.731 "enable_placement_id": 0, 00:15:06.731 "enable_zerocopy_send_server": true, 00:15:06.731 "enable_zerocopy_send_client": false, 00:15:06.731 "zerocopy_threshold": 0, 00:15:06.731 "tls_version": 0, 00:15:06.731 "enable_ktls": false 00:15:06.731 } 00:15:06.731 }, 00:15:06.731 { 00:15:06.731 "method": "sock_impl_set_options", 00:15:06.731 "params": { 00:15:06.731 "impl_name": "uring", 00:15:06.731 "recv_buf_size": 2097152, 00:15:06.731 "send_buf_size": 2097152, 00:15:06.731 "enable_recv_pipe": true, 00:15:06.731 "enable_quickack": false, 00:15:06.731 "enable_placement_id": 0, 00:15:06.731 "enable_zerocopy_send_server": false, 00:15:06.731 "enable_zerocopy_send_client": false, 00:15:06.731 "zerocopy_threshold": 0, 00:15:06.731 "tls_version": 0, 00:15:06.731 "enable_ktls": false 00:15:06.731 } 00:15:06.731 } 00:15:06.731 ] 00:15:06.731 }, 00:15:06.731 { 00:15:06.731 "subsystem": "vmd", 00:15:06.731 "config": [] 00:15:06.731 }, 00:15:06.731 { 00:15:06.731 "subsystem": "accel", 00:15:06.731 "config": [ 00:15:06.731 { 00:15:06.731 "method": "accel_set_options", 00:15:06.731 "params": { 00:15:06.731 "small_cache_size": 128, 00:15:06.731 "large_cache_size": 16, 00:15:06.731 "task_count": 2048, 00:15:06.731 "sequence_count": 2048, 00:15:06.731 "buf_count": 2048 00:15:06.731 } 00:15:06.731 } 00:15:06.731 ] 00:15:06.731 }, 00:15:06.731 { 00:15:06.731 "subsystem": "bdev", 00:15:06.731 "config": [ 00:15:06.731 { 00:15:06.731 "method": "bdev_set_options", 00:15:06.731 "params": { 00:15:06.731 "bdev_io_pool_size": 65535, 00:15:06.731 "bdev_io_cache_size": 256, 
00:15:06.731 "bdev_auto_examine": true, 00:15:06.731 "iobuf_small_cache_size": 128, 00:15:06.731 "iobuf_large_cache_size": 16 00:15:06.731 } 00:15:06.731 }, 00:15:06.731 { 00:15:06.731 "method": "bdev_raid_set_options", 00:15:06.731 "params": { 00:15:06.731 "process_window_size_kb": 1024 00:15:06.731 } 00:15:06.731 }, 00:15:06.731 { 00:15:06.731 "method": "bdev_iscsi_set_options", 00:15:06.731 "params": { 00:15:06.731 "timeout_sec": 30 00:15:06.731 } 00:15:06.731 }, 00:15:06.731 { 00:15:06.731 "method": "bdev_nvme_set_options", 00:15:06.731 "params": { 00:15:06.731 "action_on_timeout": "none", 00:15:06.731 "timeout_us": 0, 00:15:06.731 "timeout_admin_us": 0, 00:15:06.731 "keep_alive_timeout_ms": 10000, 00:15:06.731 "arbitration_burst": 0, 00:15:06.731 "low_priority_weight": 0, 00:15:06.731 "medium_priority_weight": 0, 00:15:06.731 "high_priority_weight": 0, 00:15:06.731 "nvme_adminq_poll_period_us": 10000, 00:15:06.731 "nvme_ioq_poll_period_us": 0, 00:15:06.731 "io_queue_requests": 0, 00:15:06.731 "delay_cmd_submit": true, 00:15:06.731 "transport_retry_count": 4, 00:15:06.731 "bdev_retry_count": 3, 00:15:06.731 "transport_ack_timeout": 0, 00:15:06.731 "ctrlr_loss_timeout_sec": 0, 00:15:06.731 "reconnect_delay_sec": 0, 00:15:06.731 "fast_io_fail_timeout_sec": 0, 00:15:06.731 "disable_auto_failback": false, 00:15:06.731 "generate_uuids": false, 00:15:06.731 "transport_tos": 0, 00:15:06.731 "nvme_error_stat": false, 00:15:06.731 "rdma_srq_size": 0, 00:15:06.731 "io_path_stat": false, 00:15:06.731 "allow_accel_sequence": false, 00:15:06.731 "rdma_max_cq_size": 0, 00:15:06.731 "rdma_cm_event_timeout_ms": 0, 00:15:06.731 "dhchap_digests": [ 00:15:06.731 "sha256", 00:15:06.731 "sha384", 00:15:06.731 "sha512" 00:15:06.731 ], 00:15:06.731 "dhchap_dhgroups": [ 00:15:06.731 "null", 00:15:06.731 "ffdhe2048", 00:15:06.731 "ffdhe3072", 00:15:06.731 "ffdhe4096", 00:15:06.731 "ffdhe6144", 00:15:06.731 "ffdhe8192" 00:15:06.731 ] 00:15:06.731 } 00:15:06.731 }, 00:15:06.731 { 00:15:06.731 "method": "bdev_nvme_set_hotplug", 00:15:06.731 "params": { 00:15:06.731 "period_us": 100000, 00:15:06.731 "enable": false 00:15:06.731 } 00:15:06.731 }, 00:15:06.731 { 00:15:06.731 "method": "bdev_malloc_create", 00:15:06.731 "params": { 00:15:06.731 "name": "malloc0", 00:15:06.731 "num_blocks": 8192, 00:15:06.731 "block_size": 4096, 00:15:06.731 "physical_block_size": 4096, 00:15:06.731 "uuid": "3603fe72-7090-4957-8f3b-afbd4118bb47", 00:15:06.731 "optimal_io_boundary": 0 00:15:06.731 } 00:15:06.731 }, 00:15:06.731 { 00:15:06.731 "method": "bdev_wait_for_examine" 00:15:06.731 } 00:15:06.731 ] 00:15:06.731 }, 00:15:06.731 { 00:15:06.731 "subsystem": "nbd", 00:15:06.731 "config": [] 00:15:06.731 }, 00:15:06.731 { 00:15:06.731 "subsystem": "scheduler", 00:15:06.731 "config": [ 00:15:06.731 { 00:15:06.731 "method": "framework_set_scheduler", 00:15:06.731 "params": { 00:15:06.731 "name": "static" 00:15:06.731 } 00:15:06.731 } 00:15:06.731 ] 00:15:06.731 }, 00:15:06.731 { 00:15:06.731 "subsystem": "nvmf", 00:15:06.731 "config": [ 00:15:06.731 { 00:15:06.731 "method": "nvmf_set_config", 00:15:06.731 "params": { 00:15:06.731 "discovery_filter": "match_any", 00:15:06.731 "admin_cmd_passthru": { 00:15:06.731 "identify_ctrlr": false 00:15:06.731 } 00:15:06.731 } 00:15:06.731 }, 00:15:06.731 { 00:15:06.731 "method": "nvmf_set_max_subsystems", 00:15:06.731 "params": { 00:15:06.731 "max_subsystems": 1024 00:15:06.731 } 00:15:06.731 }, 00:15:06.731 { 00:15:06.731 "method": "nvmf_set_crdt", 00:15:06.731 "params": { 00:15:06.731 "crdt1": 
0, 00:15:06.731 "crdt2": 0, 00:15:06.731 "crdt3": 0 00:15:06.731 } 00:15:06.731 }, 00:15:06.731 { 00:15:06.731 "method": "nvmf_create_transport", 00:15:06.731 "params": { 00:15:06.731 "trtype": "TCP", 00:15:06.731 "max_queue_depth": 128, 00:15:06.731 "max_io_qpairs_per_ctrlr": 127, 00:15:06.731 "in_capsule_data_size": 4096, 00:15:06.731 "max_io_size": 131072, 00:15:06.731 "io_unit_size": 131072, 00:15:06.732 "max_aq_depth": 128, 00:15:06.732 "num_shared_buffers": 511, 00:15:06.732 "buf_cache_size": 4294967295, 00:15:06.732 "dif_insert_or_strip": false, 00:15:06.732 "zcopy": false, 00:15:06.732 "c2h_success": false, 00:15:06.732 "sock_priority": 0, 00:15:06.732 "abort_timeout_sec": 1, 00:15:06.732 "ack_timeout": 0, 00:15:06.732 "data_wr_pool_size": 0 00:15:06.732 } 00:15:06.732 }, 00:15:06.732 { 00:15:06.732 "method": "nvmf_create_subsystem", 00:15:06.732 "params": { 00:15:06.732 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:06.732 "allow_any_host": false, 00:15:06.732 "serial_number": "00000000000000000000", 00:15:06.732 "model_number": "SPDK bdev Controller", 00:15:06.732 "max_namespaces": 32, 00:15:06.732 "min_cntlid": 1, 00:15:06.732 "max_cntlid": 65519, 00:15:06.732 "ana_reporting": false 00:15:06.732 } 00:15:06.732 }, 00:15:06.732 { 00:15:06.732 "method": "nvmf_subsystem_add_host", 00:15:06.732 "params": { 00:15:06.732 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:06.732 "host": "nqn.2016-06.io.spdk:host1", 00:15:06.732 "psk": "key0" 00:15:06.732 } 00:15:06.732 }, 00:15:06.732 { 00:15:06.732 "method": "nvmf_subsystem_add_ns", 00:15:06.732 "params": { 00:15:06.732 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:06.732 "namespace": { 00:15:06.732 "nsid": 1, 00:15:06.732 "bdev_name": "malloc0", 00:15:06.732 "nguid": "3603FE72709049578F3BAFBD4118BB47", 00:15:06.732 "uuid": "3603fe72-7090-4957-8f3b-afbd4118bb47", 00:15:06.732 "no_auto_visible": false 00:15:06.732 } 00:15:06.732 } 00:15:06.732 }, 00:15:06.732 { 00:15:06.732 "method": "nvmf_subsystem_add_listener", 00:15:06.732 "params": { 00:15:06.732 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:06.732 "listen_address": { 00:15:06.732 "trtype": "TCP", 00:15:06.732 "adrfam": "IPv4", 00:15:06.732 "traddr": "10.0.0.2", 00:15:06.732 "trsvcid": "4420" 00:15:06.732 }, 00:15:06.732 "secure_channel": false, 00:15:06.732 "sock_impl": "ssl" 00:15:06.732 } 00:15:06.732 } 00:15:06.732 ] 00:15:06.732 } 00:15:06.732 ] 00:15:06.732 }' 00:15:06.732 17:03:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:06.732 17:03:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:06.732 17:03:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:06.732 17:03:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=74009 00:15:06.732 17:03:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 74009 00:15:06.732 17:03:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:15:06.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
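The long JSON blocks above were not written by hand: save_config dumps the live configuration of the target and of bdevperf, and the test then restarts both applications from those dumps (here via process substitution, -c /dev/fd/62 for the target and -c /dev/fd/63 for bdevperf further below), verifying that the TLS/PSK setup survives a configuration round-trip. A file-based equivalent, with tgt.json and bperf.json as illustrative names (each application is started as its own process, and the target additionally runs inside the test's network namespace):

    # Capture the running configurations.
    rpc.py save_config > tgt.json
    rpc.py -s /var/tmp/bdevperf.sock save_config > bperf.json
    # Start fresh instances straight from the captures, skipping the per-RPC setup.
    nvmf_tgt -i 0 -e 0xFFFF -c tgt.json
    bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c bperf.json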
00:15:06.732 17:03:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 74009 ']' 00:15:06.732 17:03:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:06.732 17:03:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:06.732 17:03:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:06.732 17:03:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:06.732 17:03:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:06.990 [2024-07-15 17:03:57.059475] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:15:06.990 [2024-07-15 17:03:57.059919] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:06.990 [2024-07-15 17:03:57.194560] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:07.249 [2024-07-15 17:03:57.301353] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:07.249 [2024-07-15 17:03:57.301414] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:07.249 [2024-07-15 17:03:57.301442] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:07.249 [2024-07-15 17:03:57.301451] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:07.249 [2024-07-15 17:03:57.301458] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:07.249 [2024-07-15 17:03:57.301534] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:07.249 [2024-07-15 17:03:57.469852] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:07.508 [2024-07-15 17:03:57.546838] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:07.508 [2024-07-15 17:03:57.578765] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:07.508 [2024-07-15 17:03:57.579006] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:07.767 17:03:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:07.767 17:03:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:07.767 17:03:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:07.767 17:03:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:07.767 17:03:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:07.767 17:03:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:07.767 17:03:58 nvmf_tcp.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=74041 00:15:07.767 17:03:58 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 74041 /var/tmp/bdevperf.sock 00:15:07.767 17:03:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 74041 ']' 00:15:07.767 17:03:58 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:15:07.767 17:03:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:07.767 17:03:58 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:15:07.767 "subsystems": [ 00:15:07.767 { 00:15:07.767 "subsystem": "keyring", 00:15:07.767 "config": [ 00:15:07.767 { 00:15:07.767 "method": "keyring_file_add_key", 00:15:07.767 "params": { 00:15:07.767 "name": "key0", 00:15:07.767 "path": "/tmp/tmp.XgfR7rpDIq" 00:15:07.767 } 00:15:07.767 } 00:15:07.767 ] 00:15:07.767 }, 00:15:07.767 { 00:15:07.767 "subsystem": "iobuf", 00:15:07.767 "config": [ 00:15:07.767 { 00:15:07.767 "method": "iobuf_set_options", 00:15:07.767 "params": { 00:15:07.767 "small_pool_count": 8192, 00:15:07.767 "large_pool_count": 1024, 00:15:07.767 "small_bufsize": 8192, 00:15:07.767 "large_bufsize": 135168 00:15:07.767 } 00:15:07.767 } 00:15:07.767 ] 00:15:07.767 }, 00:15:07.767 { 00:15:07.767 "subsystem": "sock", 00:15:07.767 "config": [ 00:15:07.767 { 00:15:07.767 "method": "sock_set_default_impl", 00:15:07.767 "params": { 00:15:07.767 "impl_name": "uring" 00:15:07.767 } 00:15:07.767 }, 00:15:07.767 { 00:15:07.767 "method": "sock_impl_set_options", 00:15:07.767 "params": { 00:15:07.767 "impl_name": "ssl", 00:15:07.767 "recv_buf_size": 4096, 00:15:07.767 "send_buf_size": 4096, 00:15:07.767 "enable_recv_pipe": true, 00:15:07.767 "enable_quickack": false, 00:15:07.767 "enable_placement_id": 0, 00:15:07.767 "enable_zerocopy_send_server": true, 00:15:07.768 "enable_zerocopy_send_client": false, 00:15:07.768 "zerocopy_threshold": 0, 00:15:07.768 "tls_version": 0, 00:15:07.768 "enable_ktls": false 00:15:07.768 } 00:15:07.768 }, 00:15:07.768 { 00:15:07.768 "method": "sock_impl_set_options", 00:15:07.768 "params": { 00:15:07.768 "impl_name": "posix", 00:15:07.768 "recv_buf_size": 2097152, 00:15:07.768 "send_buf_size": 2097152, 00:15:07.768 
"enable_recv_pipe": true, 00:15:07.768 "enable_quickack": false, 00:15:07.768 "enable_placement_id": 0, 00:15:07.768 "enable_zerocopy_send_server": true, 00:15:07.768 "enable_zerocopy_send_client": false, 00:15:07.768 "zerocopy_threshold": 0, 00:15:07.768 "tls_version": 0, 00:15:07.768 "enable_ktls": false 00:15:07.768 } 00:15:07.768 }, 00:15:07.768 { 00:15:07.768 "method": "sock_impl_set_options", 00:15:07.768 "params": { 00:15:07.768 "impl_name": "uring", 00:15:07.768 "recv_buf_size": 2097152, 00:15:07.768 "send_buf_size": 2097152, 00:15:07.768 "enable_recv_pipe": true, 00:15:07.768 "enable_quickack": false, 00:15:07.768 "enable_placement_id": 0, 00:15:07.768 "enable_zerocopy_send_server": false, 00:15:07.768 "enable_zerocopy_send_client": false, 00:15:07.768 "zerocopy_threshold": 0, 00:15:07.768 "tls_version": 0, 00:15:07.768 "enable_ktls": false 00:15:07.768 } 00:15:07.768 } 00:15:07.768 ] 00:15:07.768 }, 00:15:07.768 { 00:15:07.768 "subsystem": "vmd", 00:15:07.768 "config": [] 00:15:07.768 }, 00:15:07.768 { 00:15:07.768 "subsystem": "accel", 00:15:07.768 "config": [ 00:15:07.768 { 00:15:07.768 "method": "accel_set_options", 00:15:07.768 "params": { 00:15:07.768 "small_cache_size": 128, 00:15:07.768 "large_cache_size": 16, 00:15:07.768 "task_count": 2048, 00:15:07.768 "sequence_count": 2048, 00:15:07.768 "buf_count": 2048 00:15:07.768 } 00:15:07.768 } 00:15:07.768 ] 00:15:07.768 }, 00:15:07.768 { 00:15:07.768 "subsystem": "bdev", 00:15:07.768 "config": [ 00:15:07.768 { 00:15:07.768 "method": "bdev_set_options", 00:15:07.768 "params": { 00:15:07.768 "bdev_io_pool_size": 65535, 00:15:07.768 "bdev_io_cache_size": 256, 00:15:07.768 "bdev_auto_examine": true, 00:15:07.768 "iobuf_small_cache_size": 128, 00:15:07.768 "iobuf_large_cache_size": 16 00:15:07.768 } 00:15:07.768 }, 00:15:07.768 { 00:15:07.768 "method": "bdev_raid_set_options", 00:15:07.768 "params": { 00:15:07.768 "process_window_size_kb": 1024 00:15:07.768 } 00:15:07.768 }, 00:15:07.768 { 00:15:07.768 "method": "bdev_iscsi_set_options", 00:15:07.768 "params": { 00:15:07.768 "timeout_sec": 30 00:15:07.768 } 00:15:07.768 }, 00:15:07.768 { 00:15:07.768 "method": "bdev_nvme_set_options", 00:15:07.768 "params": { 00:15:07.768 "action_on_timeout": "none", 00:15:07.768 "timeout_us": 0, 00:15:07.768 "timeout_admin_us": 0, 00:15:07.768 "keep_alive_timeout_ms": 10000, 00:15:07.768 "arbitration_burst": 0, 00:15:07.768 "low_priority_weight": 0, 00:15:07.768 "medium_priority_weight": 0, 00:15:07.768 "high_priority_weight": 0, 00:15:07.768 "nvme_adminq_poll_period_us": 10000, 00:15:07.768 "nvme_ioq_poll_period_us": 0, 00:15:07.768 "io_queue_requests": 512, 00:15:07.768 "delay_cmd_submit": true, 00:15:07.768 "transport_retry_count": 4, 00:15:07.768 "bdev_retry_count": 3, 00:15:07.768 "transport_ack_timeout": 0, 00:15:07.768 "ctrlr_loss_timeout_seWaiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:07.768 17:03:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:07.768 17:03:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:15:07.768 17:03:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:07.768 c": 0, 00:15:07.768 "reconnect_delay_sec": 0, 00:15:07.768 "fast_io_fail_timeout_sec": 0, 00:15:07.768 "disable_auto_failback": false, 00:15:07.768 "generate_uuids": false, 00:15:07.768 "transport_tos": 0, 00:15:07.768 "nvme_error_stat": false, 00:15:07.768 "rdma_srq_size": 0, 00:15:07.768 "io_path_stat": false, 00:15:07.768 "allow_accel_sequence": false, 00:15:07.768 "rdma_max_cq_size": 0, 00:15:07.768 "rdma_cm_event_timeout_ms": 0, 00:15:07.768 "dhchap_digests": [ 00:15:07.768 "sha256", 00:15:07.768 "sha384", 00:15:07.768 "sha512" 00:15:07.768 ], 00:15:07.768 "dhchap_dhgroups": [ 00:15:07.768 "null", 00:15:07.768 "ffdhe2048", 00:15:07.768 "ffdhe3072", 00:15:07.768 "ffdhe4096", 00:15:07.768 "ffdhe6144", 00:15:07.768 "ffdhe8192" 00:15:07.768 ] 00:15:07.768 } 00:15:07.768 }, 00:15:07.768 { 00:15:07.768 "method": "bdev_nvme_attach_controller", 00:15:07.768 "params": { 00:15:07.768 "name": "nvme0", 00:15:07.768 "trtype": "TCP", 00:15:07.768 "adrfam": "IPv4", 00:15:07.768 "traddr": "10.0.0.2", 00:15:07.768 "trsvcid": "4420", 00:15:07.768 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:07.768 "prchk_reftag": false, 00:15:07.768 "prchk_guard": false, 00:15:07.768 "ctrlr_loss_timeout_sec": 0, 00:15:07.768 "reconnect_delay_sec": 0, 00:15:07.768 "fast_io_fail_timeout_sec": 0, 00:15:07.768 "psk": "key0", 00:15:07.768 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:07.768 "hdgst": false, 00:15:07.768 "ddgst": false 00:15:07.768 } 00:15:07.768 }, 00:15:07.768 { 00:15:07.768 "method": "bdev_nvme_set_hotplug", 00:15:07.768 "params": { 00:15:07.768 "period_us": 100000, 00:15:07.768 "enable": false 00:15:07.768 } 00:15:07.768 }, 00:15:07.768 { 00:15:07.768 "method": "bdev_enable_histogram", 00:15:07.768 "params": { 00:15:07.768 "name": "nvme0n1", 00:15:07.768 "enable": true 00:15:07.768 } 00:15:07.768 }, 00:15:07.768 { 00:15:07.768 "method": "bdev_wait_for_examine" 00:15:07.768 } 00:15:07.768 ] 00:15:07.768 }, 00:15:07.768 { 00:15:07.768 "subsystem": "nbd", 00:15:07.768 "config": [] 00:15:07.768 } 00:15:07.768 ] 00:15:07.768 }' 00:15:07.768 17:03:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:08.027 [2024-07-15 17:03:58.066690] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:15:08.027 [2024-07-15 17:03:58.066982] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74041 ] 00:15:08.027 [2024-07-15 17:03:58.204096] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:08.027 [2024-07-15 17:03:58.314411] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:08.286 [2024-07-15 17:03:58.449470] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:08.286 [2024-07-15 17:03:58.494685] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:08.854 17:03:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:08.854 17:03:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:08.854 17:03:59 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:08.854 17:03:59 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:15:09.113 17:03:59 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:09.113 17:03:59 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:09.371 Running I/O for 1 seconds... 00:15:10.307 00:15:10.307 Latency(us) 00:15:10.307 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:10.307 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:10.307 Verification LBA range: start 0x0 length 0x2000 00:15:10.307 nvme0n1 : 1.03 3981.33 15.55 0.00 0.00 31806.16 8340.95 20971.52 00:15:10.307 =================================================================================================================== 00:15:10.307 Total : 3981.33 15.55 0.00 0.00 31806.16 8340.95 20971.52 00:15:10.307 0 00:15:10.307 17:04:00 nvmf_tcp.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 00:15:10.307 17:04:00 nvmf_tcp.nvmf_tls -- target/tls.sh@281 -- # cleanup 00:15:10.307 17:04:00 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:15:10.307 17:04:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:15:10.307 17:04:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:15:10.307 17:04:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:15:10.307 17:04:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:10.307 17:04:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:15:10.307 17:04:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:15:10.307 17:04:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:15:10.307 17:04:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:10.307 nvmf_trace.0 00:15:10.307 17:04:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:15:10.307 17:04:00 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 74041 00:15:10.307 17:04:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 74041 ']' 00:15:10.307 17:04:00 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@952 -- # kill -0 74041 00:15:10.307 17:04:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:10.307 17:04:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:10.307 17:04:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74041 00:15:10.565 killing process with pid 74041 00:15:10.565 Received shutdown signal, test time was about 1.000000 seconds 00:15:10.565 00:15:10.565 Latency(us) 00:15:10.565 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:10.565 =================================================================================================================== 00:15:10.565 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:10.565 17:04:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:10.565 17:04:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:10.565 17:04:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74041' 00:15:10.565 17:04:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 74041 00:15:10.565 17:04:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 74041 00:15:10.565 17:04:00 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:15:10.565 17:04:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:10.565 17:04:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:15:10.823 17:04:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:10.823 17:04:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:15:10.823 17:04:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:10.823 17:04:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:10.823 rmmod nvme_tcp 00:15:10.823 rmmod nvme_fabrics 00:15:10.823 rmmod nvme_keyring 00:15:10.823 17:04:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:10.823 17:04:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:15:10.823 17:04:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:15:10.823 17:04:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 74009 ']' 00:15:10.823 17:04:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 74009 00:15:10.824 17:04:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 74009 ']' 00:15:10.824 17:04:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 74009 00:15:10.824 17:04:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:10.824 17:04:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:10.824 17:04:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74009 00:15:10.824 killing process with pid 74009 00:15:10.824 17:04:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:10.824 17:04:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:10.824 17:04:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74009' 00:15:10.824 17:04:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 74009 00:15:10.824 17:04:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 74009 00:15:11.082 17:04:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:11.082 17:04:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:11.082 17:04:01 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:11.082 17:04:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:11.082 17:04:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:11.082 17:04:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:11.082 17:04:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:11.082 17:04:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:11.082 17:04:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:11.082 17:04:01 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.4gILJCvSLX /tmp/tmp.FpPoQhgEos /tmp/tmp.XgfR7rpDIq 00:15:11.082 ************************************ 00:15:11.082 END TEST nvmf_tls 00:15:11.082 ************************************ 00:15:11.082 00:15:11.082 real 1m26.660s 00:15:11.082 user 2m18.501s 00:15:11.082 sys 0m27.873s 00:15:11.082 17:04:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:11.082 17:04:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:11.082 17:04:01 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:11.082 17:04:01 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:15:11.082 17:04:01 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:11.082 17:04:01 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:11.082 17:04:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:11.082 ************************************ 00:15:11.082 START TEST nvmf_fips 00:15:11.082 ************************************ 00:15:11.082 17:04:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:15:11.082 * Looking for test storage... 
00:15:11.342 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:15:11.342 17:04:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:11.342 17:04:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:15:11.342 17:04:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:11.342 17:04:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:11.342 17:04:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:11.342 17:04:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:11.342 17:04:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:11.342 17:04:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:11.342 17:04:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:11.342 17:04:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:11.342 17:04:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:11.342 17:04:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:11.342 17:04:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da 00:15:11.342 17:04:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=0b4e8503-7bac-4879-926a-209303c4b3da 00:15:11.342 17:04:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:11.342 17:04:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:11.342 17:04:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:11.342 17:04:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:11.342 17:04:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:11.342 17:04:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:11.342 17:04:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:11.342 17:04:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:11.342 17:04:01 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:11.342 17:04:01 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:11.342 17:04:01 nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:11.342 17:04:01 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:15:11.342 17:04:01 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:11.342 17:04:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:15:11.342 17:04:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:11.342 17:04:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:11.342 17:04:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:11.342 17:04:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:11.342 17:04:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:11.342 17:04:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:11.342 17:04:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:11.342 17:04:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:11.342 17:04:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:11.342 17:04:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:15:11.342 17:04:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:15:11.342 17:04:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:15:11.342 17:04:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:15:11.342 17:04:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:15:11.342 17:04:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:15:11.342 17:04:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:15:11.342 17:04:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:15:11.342 17:04:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:15:11.342 17:04:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:15:11.342 17:04:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:15:11.342 17:04:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:15:11.342 17:04:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:15:11.342 17:04:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:15:11.342 17:04:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:15:11.342 17:04:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:15:11.343 17:04:01 nvmf_tcp.nvmf_fips -- 
scripts/common.sh@341 -- # case "$op" in 00:15:11.343 17:04:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:15:11.343 17:04:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:15:11.343 17:04:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:11.343 17:04:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:15:11.343 17:04:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:15:11.343 17:04:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:15:11.343 17:04:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:15:11.343 17:04:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:15:11.343 17:04:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:15:11.343 17:04:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:15:11.343 17:04:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:15:11.343 17:04:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:15:11.343 17:04:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:15:11.343 17:04:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:15:11.343 17:04:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:15:11.343 17:04:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:15:11.343 17:04:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:11.343 17:04:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:15:11.343 17:04:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:15:11.343 17:04:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:15:11.343 17:04:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:15:11.343 17:04:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:15:11.343 17:04:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:15:11.343 17:04:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:15:11.343 17:04:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:15:11.343 17:04:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:15:11.343 17:04:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:15:11.343 17:04:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:15:11.343 17:04:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:15:11.343 17:04:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:15:11.343 17:04:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:11.343 17:04:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:15:11.343 17:04:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:15:11.343 17:04:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:15:11.343 17:04:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:15:11.343 17:04:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:15:11.343 17:04:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:15:11.343 17:04:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:15:11.343 17:04:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:15:11.343 17:04:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:15:11.343 17:04:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:15:11.343 17:04:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:15:11.343 17:04:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:15:11.343 17:04:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:15:11.343 17:04:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:15:11.343 17:04:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:15:11.343 17:04:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:15:11.343 17:04:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:15:11.343 17:04:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:15:11.343 17:04:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:15:11.343 17:04:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:15:11.343 17:04:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:15:11.343 17:04:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:15:11.343 17:04:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:15:11.343 17:04:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:15:11.343 17:04:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:15:11.343 17:04:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:15:11.343 17:04:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:15:11.343 17:04:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:15:11.343 17:04:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:15:11.343 17:04:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:15:11.343 17:04:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:15:11.343 17:04:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:15:11.343 17:04:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:15:11.343 17:04:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:15:11.343 17:04:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:15:11.343 17:04:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:15:11.343 17:04:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:11.343 17:04:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:15:11.343 17:04:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:11.343 17:04:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:15:11.343 17:04:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:11.343 17:04:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:15:11.343 17:04:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:15:11.343 17:04:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:15:11.343 Error setting digest 00:15:11.343 00A2E3BDB87F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:15:11.343 00A2E3BDB87F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:15:11.343 17:04:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:15:11.343 17:04:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:11.343 17:04:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:11.343 17:04:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:11.343 17:04:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:15:11.343 17:04:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:11.343 17:04:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:11.343 17:04:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:11.343 17:04:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:11.343 17:04:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:11.343 17:04:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:11.343 17:04:01 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:11.343 17:04:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:11.343 17:04:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:11.343 17:04:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:11.343 17:04:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:11.343 17:04:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:11.343 17:04:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:11.343 17:04:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:11.343 17:04:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:11.343 17:04:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:11.343 17:04:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:11.343 17:04:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:11.343 17:04:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:11.343 17:04:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:11.343 17:04:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:11.343 17:04:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:11.343 17:04:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:11.343 17:04:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:11.343 17:04:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:11.343 17:04:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:11.343 17:04:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:11.343 17:04:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:11.343 Cannot find device "nvmf_tgt_br" 00:15:11.343 17:04:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@155 -- # true 00:15:11.343 17:04:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:11.343 Cannot find device "nvmf_tgt_br2" 00:15:11.343 17:04:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@156 -- # true 00:15:11.343 17:04:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:11.343 17:04:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:11.343 Cannot find device "nvmf_tgt_br" 00:15:11.603 17:04:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@158 -- # true 00:15:11.603 17:04:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:11.603 Cannot find device "nvmf_tgt_br2" 00:15:11.603 17:04:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@159 -- # true 00:15:11.603 17:04:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:11.603 17:04:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:11.603 17:04:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:11.603 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:11.603 17:04:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@162 -- # true 00:15:11.603 17:04:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@163 -- # ip netns 
exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:11.603 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:11.603 17:04:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@163 -- # true 00:15:11.603 17:04:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:11.603 17:04:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:11.603 17:04:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:11.603 17:04:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:11.603 17:04:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:11.603 17:04:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:11.603 17:04:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:11.603 17:04:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:11.603 17:04:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:11.603 17:04:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:11.603 17:04:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:11.603 17:04:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:11.603 17:04:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:11.603 17:04:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:11.603 17:04:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:11.603 17:04:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:11.603 17:04:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:11.603 17:04:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:11.603 17:04:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:11.603 17:04:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:11.603 17:04:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:11.603 17:04:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:11.603 17:04:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:11.603 17:04:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:11.603 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:11.603 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:15:11.603 00:15:11.603 --- 10.0.0.2 ping statistics --- 00:15:11.603 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:11.603 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:15:11.603 17:04:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:11.603 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:15:11.603 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:15:11.603 00:15:11.603 --- 10.0.0.3 ping statistics --- 00:15:11.603 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:11.603 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:15:11.603 17:04:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:11.861 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:11.861 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.048 ms 00:15:11.861 00:15:11.861 --- 10.0.0.1 ping statistics --- 00:15:11.861 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:11.861 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:15:11.861 17:04:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:11.861 17:04:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@433 -- # return 0 00:15:11.861 17:04:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:11.861 17:04:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:11.861 17:04:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:11.861 17:04:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:11.861 17:04:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:11.861 17:04:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:11.861 17:04:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:11.861 17:04:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:15:11.861 17:04:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:11.861 17:04:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:11.861 17:04:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:11.861 17:04:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=74310 00:15:11.861 17:04:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:11.861 17:04:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 74310 00:15:11.861 17:04:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 74310 ']' 00:15:11.861 17:04:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:11.861 17:04:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:11.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:11.861 17:04:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:11.861 17:04:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:11.861 17:04:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:11.861 [2024-07-15 17:04:02.005731] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
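Stripped of the xtrace noise, the nvmf_veth_init sequence recorded above builds the usual virtual topology for these tests: a network namespace for the target, two veth pairs plus a bridge, and an iptables rule that admits TCP port 4420, verified by the three pings. A condensed recap of the commands that matter, as a sketch only (names and addresses are the ones shown in the log; link-up steps are omitted for brevity):

    ip netns add nvmf_tgt_ns_spdk                                  # target runs in its own namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator-side veth pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target-side veth pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                       # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target address
    ip link add nvmf_br type bridge                                # bridge the two pairs together
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    # the pings to 10.0.0.2, 10.0.0.3 and 10.0.0.1 only confirm the topology before the target starts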
00:15:11.861 [2024-07-15 17:04:02.005832] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:11.861 [2024-07-15 17:04:02.144683] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:12.119 [2024-07-15 17:04:02.274853] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:12.119 [2024-07-15 17:04:02.274907] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:12.119 [2024-07-15 17:04:02.274921] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:12.119 [2024-07-15 17:04:02.274932] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:12.119 [2024-07-15 17:04:02.274941] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:12.119 [2024-07-15 17:04:02.274972] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:12.119 [2024-07-15 17:04:02.332492] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:13.054 17:04:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:13.054 17:04:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:15:13.054 17:04:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:13.054 17:04:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:13.054 17:04:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:13.054 17:04:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:13.054 17:04:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:15:13.054 17:04:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:15:13.054 17:04:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:15:13.054 17:04:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:15:13.054 17:04:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:15:13.054 17:04:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:15:13.054 17:04:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:15:13.054 17:04:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:13.054 [2024-07-15 17:04:03.279239] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:13.054 [2024-07-15 17:04:03.295178] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:13.054 [2024-07-15 17:04:03.295346] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:13.054 [2024-07-15 17:04:03.326348] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:15:13.054 malloc0 00:15:13.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
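At this point fips.sh has written the TLS PSK interchange key to key.txt, restricted it to mode 0600, and handed it to setup_nvmf_tgt_conf, which is what produced the transport, TLS listener and malloc0 notices above. A rough sketch of an equivalent target-side RPC sequence follows; the key, NQNs, address and port are the ones visible in this log, while the malloc geometry, serial number and TLS-related flags are illustrative and differ between SPDK versions, so treat this as an approximation of what setup_nvmf_tgt_conf does rather than a copy of it:

    # write the PSK interchange key and lock down its permissions (value from the log)
    echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: > key.txt
    chmod 0600 key.txt
    rpc.py nvmf_create_transport -t tcp                            # "*** TCP Transport Init ***"
    rpc.py bdev_malloc_create -b malloc0 64 512                    # backing namespace (sizes illustrative)
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # (the test marks this listener as TLS/secure-channel; the flag for that step is version-dependent)
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key.txt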
00:15:13.313 17:04:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:13.313 17:04:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=74351 00:15:13.313 17:04:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 74351 /var/tmp/bdevperf.sock 00:15:13.313 17:04:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:13.313 17:04:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 74351 ']' 00:15:13.313 17:04:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:13.313 17:04:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:13.313 17:04:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:13.313 17:04:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:13.313 17:04:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:13.313 [2024-07-15 17:04:03.437328] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:15:13.313 [2024-07-15 17:04:03.437467] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74351 ] 00:15:13.313 [2024-07-15 17:04:03.579225] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:13.571 [2024-07-15 17:04:03.741380] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:13.571 [2024-07-15 17:04:03.814603] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:14.139 17:04:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:14.139 17:04:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:15:14.139 17:04:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:15:14.397 [2024-07-15 17:04:04.630522] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:14.397 [2024-07-15 17:04:04.630708] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:15:14.654 TLSTESTn1 00:15:14.654 17:04:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:14.654 Running I/O for 10 seconds... 
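The ten-second run announced above was set in motion by three commands that are easy to lose in the trace, so here they are in condensed form (paths shortened from /home/vagrant/spdk_repo/spdk/, everything else as logged): bdevperf is started idle with -z on its own RPC socket, the TLS-capable controller is attached through that socket, and bdevperf.py then triggers the queued verify workload.

    # start bdevperf on core 2 (-m 0x4), idle (-z), queue depth 128, 4 KiB verify I/O for 10 s
    build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
    # attach the NVMe/TCP controller, presenting the PSK written earlier
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        -q nqn.2016-06.io.spdk:host1 --psk test/nvmf/fips/key.txt
    # kick off the workload; the results table that follows is printed by the bdevperf process
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests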
00:15:24.662 00:15:24.662 Latency(us) 00:15:24.662 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:24.662 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:24.662 Verification LBA range: start 0x0 length 0x2000 00:15:24.662 TLSTESTn1 : 10.01 3939.08 15.39 0.00 0.00 32440.85 6017.40 41943.04 00:15:24.662 =================================================================================================================== 00:15:24.662 Total : 3939.08 15.39 0.00 0.00 32440.85 6017.40 41943.04 00:15:24.662 0 00:15:24.662 17:04:14 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:15:24.662 17:04:14 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:15:24.662 17:04:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:15:24.662 17:04:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:15:24.662 17:04:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:15:24.662 17:04:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:24.662 17:04:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:15:24.662 17:04:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:15:24.662 17:04:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:15:24.662 17:04:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:24.662 nvmf_trace.0 00:15:24.921 17:04:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:15:24.921 17:04:14 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 74351 00:15:24.921 17:04:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 74351 ']' 00:15:24.921 17:04:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 74351 00:15:24.921 17:04:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:15:24.921 17:04:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:24.921 17:04:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74351 00:15:24.921 17:04:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:15:24.921 killing process with pid 74351 00:15:24.921 Received shutdown signal, test time was about 10.000000 seconds 00:15:24.921 00:15:24.921 Latency(us) 00:15:24.921 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:24.921 =================================================================================================================== 00:15:24.921 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:24.921 17:04:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:15:24.921 17:04:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74351' 00:15:24.921 17:04:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 74351 00:15:24.921 [2024-07-15 17:04:15.024498] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:15:24.921 17:04:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 74351 00:15:25.181 17:04:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:15:25.181 17:04:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 
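The cleanup above shows process_shm packing the target's tracepoint buffer out of shared memory before the processes are torn down. To look at the same data yourself there are two obvious options, sketched here (output path is illustrative; spdk_trace is assumed to be the copy built under build/bin):

    # archive the shared-memory trace file, exactly as process_shm does in the log
    tar -C /dev/shm/ -czf nvmf_trace.0_shm.tar.gz nvmf_trace.0
    # or take a live snapshot while the target is still up, as the startup notice suggested
    build/bin/spdk_trace -s nvmf -i 0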
00:15:25.181 17:04:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:15:25.181 17:04:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:25.181 17:04:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:15:25.181 17:04:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:25.181 17:04:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:25.181 rmmod nvme_tcp 00:15:25.181 rmmod nvme_fabrics 00:15:25.181 rmmod nvme_keyring 00:15:25.181 17:04:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:25.181 17:04:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:15:25.181 17:04:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:15:25.181 17:04:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 74310 ']' 00:15:25.181 17:04:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 74310 00:15:25.181 17:04:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 74310 ']' 00:15:25.181 17:04:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 74310 00:15:25.181 17:04:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:15:25.181 17:04:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:25.181 17:04:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74310 00:15:25.181 17:04:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:25.181 killing process with pid 74310 00:15:25.181 17:04:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:25.181 17:04:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74310' 00:15:25.181 17:04:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 74310 00:15:25.181 [2024-07-15 17:04:15.375480] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:15:25.181 17:04:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 74310 00:15:25.441 17:04:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:25.441 17:04:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:25.441 17:04:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:25.441 17:04:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:25.441 17:04:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:25.441 17:04:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:25.441 17:04:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:25.441 17:04:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:25.441 17:04:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:25.441 17:04:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:15:25.441 00:15:25.441 real 0m14.376s 00:15:25.441 user 0m19.989s 00:15:25.441 sys 0m5.555s 00:15:25.441 17:04:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:25.441 17:04:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:25.441 ************************************ 00:15:25.441 END TEST nvmf_fips 00:15:25.441 ************************************ 00:15:25.441 17:04:15 
nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:25.441 17:04:15 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:15:25.441 17:04:15 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ virt == phy ]] 00:15:25.441 17:04:15 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:15:25.441 17:04:15 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:25.441 17:04:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:25.700 17:04:15 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:15:25.700 17:04:15 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:25.700 17:04:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:25.700 17:04:15 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 1 -eq 0 ]] 00:15:25.700 17:04:15 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:15:25.700 17:04:15 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:25.700 17:04:15 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:25.700 17:04:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:25.700 ************************************ 00:15:25.700 START TEST nvmf_identify 00:15:25.700 ************************************ 00:15:25.700 17:04:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:15:25.700 * Looking for test storage... 00:15:25.700 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:25.700 17:04:15 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:25.700 17:04:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:15:25.700 17:04:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:25.700 17:04:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:25.700 17:04:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:25.700 17:04:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:25.700 17:04:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:25.700 17:04:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:25.700 17:04:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:25.700 17:04:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:25.700 17:04:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:25.700 17:04:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:25.700 17:04:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da 00:15:25.700 17:04:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=0b4e8503-7bac-4879-926a-209303c4b3da 00:15:25.700 17:04:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:25.700 17:04:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:25.700 17:04:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:25.700 17:04:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:25.700 17:04:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:25.700 17:04:15 
nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:25.700 17:04:15 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:25.700 17:04:15 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:25.701 17:04:15 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.701 17:04:15 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.701 17:04:15 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.701 17:04:15 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:15:25.701 17:04:15 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.701 17:04:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:15:25.701 17:04:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:25.701 17:04:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:25.701 17:04:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:25.701 17:04:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:25.701 17:04:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:25.701 17:04:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:25.701 17:04:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:25.701 17:04:15 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:25.701 17:04:15 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:25.701 17:04:15 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:25.701 17:04:15 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:15:25.701 17:04:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:25.701 17:04:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:25.701 17:04:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:25.701 17:04:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:25.701 17:04:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:25.701 17:04:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:25.701 17:04:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:25.701 17:04:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:25.701 17:04:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:25.701 17:04:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:25.701 17:04:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:25.701 17:04:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:25.701 17:04:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:25.701 17:04:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:25.701 17:04:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:25.701 17:04:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:25.701 17:04:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:25.701 17:04:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:25.701 17:04:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:25.701 17:04:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:25.701 17:04:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:25.701 17:04:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:25.701 17:04:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:25.701 17:04:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:25.701 17:04:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:25.701 17:04:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:25.701 17:04:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:25.701 17:04:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:25.701 Cannot find device "nvmf_tgt_br" 00:15:25.701 17:04:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@155 -- # true 00:15:25.701 17:04:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:25.701 Cannot find device "nvmf_tgt_br2" 00:15:25.701 17:04:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@156 -- # true 00:15:25.701 17:04:15 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:25.701 17:04:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:25.701 Cannot find device "nvmf_tgt_br" 00:15:25.701 17:04:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@158 -- # true 00:15:25.701 17:04:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:25.701 Cannot find device "nvmf_tgt_br2" 00:15:25.701 17:04:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@159 -- # true 00:15:25.701 17:04:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:25.960 17:04:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:25.960 17:04:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:25.960 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:25.960 17:04:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@162 -- # true 00:15:25.960 17:04:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:25.960 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:25.960 17:04:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@163 -- # true 00:15:25.960 17:04:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:25.960 17:04:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:25.960 17:04:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:25.960 17:04:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:25.960 17:04:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:25.960 17:04:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:25.960 17:04:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:25.960 17:04:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:25.960 17:04:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:25.960 17:04:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:25.960 17:04:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:25.960 17:04:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:25.960 17:04:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:25.960 17:04:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:25.960 17:04:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:25.960 17:04:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:25.960 17:04:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:25.960 17:04:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:25.960 17:04:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 
00:15:25.960 17:04:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:25.960 17:04:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:25.960 17:04:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:25.961 17:04:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:25.961 17:04:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:25.961 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:25.961 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.087 ms 00:15:25.961 00:15:25.961 --- 10.0.0.2 ping statistics --- 00:15:25.961 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:25.961 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:15:25.961 17:04:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:25.961 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:25.961 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:15:25.961 00:15:25.961 --- 10.0.0.3 ping statistics --- 00:15:25.961 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:25.961 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:15:25.961 17:04:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:25.961 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:25.961 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:15:25.961 00:15:25.961 --- 10.0.0.1 ping statistics --- 00:15:25.961 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:25.961 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:15:25.961 17:04:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:25.961 17:04:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@433 -- # return 0 00:15:25.961 17:04:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:25.961 17:04:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:25.961 17:04:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:25.961 17:04:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:25.961 17:04:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:25.961 17:04:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:25.961 17:04:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:26.220 17:04:16 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:15:26.220 17:04:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:26.220 17:04:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:26.220 17:04:16 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=74698 00:15:26.220 17:04:16 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:26.220 17:04:16 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:26.220 17:04:16 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 74698 00:15:26.220 17:04:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 74698 ']' 00:15:26.220 17:04:16 
nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:26.220 17:04:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:26.220 17:04:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:26.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:26.220 17:04:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:26.220 17:04:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:26.220 [2024-07-15 17:04:16.348180] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:15:26.220 [2024-07-15 17:04:16.348855] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:26.220 [2024-07-15 17:04:16.495881] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:26.479 [2024-07-15 17:04:16.669242] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:26.479 [2024-07-15 17:04:16.669625] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:26.479 [2024-07-15 17:04:16.669866] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:26.479 [2024-07-15 17:04:16.670017] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:26.479 [2024-07-15 17:04:16.670136] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
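host/identify.sh@18 (traced above) launches the target inside the namespace with shared-memory id 0, tracepoint mask 0xFFFF and core mask 0xF, and waitforlisten then blocks until the application answers RPCs on /var/tmp/spdk.sock. A minimal sketch of that launch-and-wait pattern; the polling loop is an illustrative stand-in for waitforlisten rather than its actual implementation, and scripts/rpc.py is SPDK's stock RPC client:

  # Start nvmf_tgt in the target namespace, same flags as the trace above
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!

  # Poll the RPC socket until the app is ready (or bail out if the process dies)
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$nvmfpid" 2>/dev/null || exit 1
      sleep 0.5
  done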
00:15:26.479 [2024-07-15 17:04:16.670351] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:26.479 [2024-07-15 17:04:16.670513] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:26.479 [2024-07-15 17:04:16.670692] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:26.479 [2024-07-15 17:04:16.670570] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:26.479 [2024-07-15 17:04:16.745876] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:27.101 17:04:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:27.101 17:04:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:15:27.101 17:04:17 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:27.101 17:04:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.101 17:04:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:27.101 [2024-07-15 17:04:17.372252] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:27.361 17:04:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.361 17:04:17 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:15:27.361 17:04:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:27.361 17:04:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:27.361 17:04:17 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:27.361 17:04:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.361 17:04:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:27.361 Malloc0 00:15:27.361 17:04:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.361 17:04:17 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:27.361 17:04:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.361 17:04:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:27.361 17:04:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.361 17:04:17 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:15:27.361 17:04:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.361 17:04:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:27.361 17:04:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.361 17:04:17 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:27.361 17:04:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.361 17:04:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:27.361 [2024-07-15 17:04:17.487990] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:27.361 17:04:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.361 17:04:17 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:27.361 17:04:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.361 17:04:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:27.361 17:04:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.361 17:04:17 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:15:27.361 17:04:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.361 17:04:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:27.361 [ 00:15:27.361 { 00:15:27.361 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:27.361 "subtype": "Discovery", 00:15:27.361 "listen_addresses": [ 00:15:27.361 { 00:15:27.361 "trtype": "TCP", 00:15:27.361 "adrfam": "IPv4", 00:15:27.361 "traddr": "10.0.0.2", 00:15:27.361 "trsvcid": "4420" 00:15:27.361 } 00:15:27.361 ], 00:15:27.361 "allow_any_host": true, 00:15:27.361 "hosts": [] 00:15:27.361 }, 00:15:27.361 { 00:15:27.361 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:27.361 "subtype": "NVMe", 00:15:27.361 "listen_addresses": [ 00:15:27.361 { 00:15:27.361 "trtype": "TCP", 00:15:27.361 "adrfam": "IPv4", 00:15:27.361 "traddr": "10.0.0.2", 00:15:27.361 "trsvcid": "4420" 00:15:27.361 } 00:15:27.361 ], 00:15:27.361 "allow_any_host": true, 00:15:27.361 "hosts": [], 00:15:27.361 "serial_number": "SPDK00000000000001", 00:15:27.361 "model_number": "SPDK bdev Controller", 00:15:27.361 "max_namespaces": 32, 00:15:27.361 "min_cntlid": 1, 00:15:27.361 "max_cntlid": 65519, 00:15:27.361 "namespaces": [ 00:15:27.361 { 00:15:27.361 "nsid": 1, 00:15:27.361 "bdev_name": "Malloc0", 00:15:27.361 "name": "Malloc0", 00:15:27.361 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:15:27.361 "eui64": "ABCDEF0123456789", 00:15:27.361 "uuid": "fabe60b2-9e3e-4845-a673-22357cb73efa" 00:15:27.361 } 00:15:27.361 ] 00:15:27.361 } 00:15:27.361 ] 00:15:27.361 17:04:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.361 17:04:17 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:15:27.361 [2024-07-15 17:04:17.542726] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
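With the target up, identify.sh configures it entirely over RPC: a TCP transport with the options shown (-t tcp -o -u 8192), a 64 MiB Malloc bdev with 512-byte blocks (the MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE values from the top of the test), subsystem nqn.2016-06.io.spdk:cnode1 carrying Malloc0 as namespace 1 with fixed NGUID/EUI-64 values, and TCP listeners on 10.0.0.2:4420 for both that subsystem and discovery; nvmf_get_subsystems then returns the JSON shown above. The rpc_cmd helper used in the trace forwards its arguments to SPDK's scripts/rpc.py, so the same configuration can be written out directly as a sketch:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk.sock

  "$rpc" -s "$sock" nvmf_create_transport -t tcp -o -u 8192
  "$rpc" -s "$sock" bdev_malloc_create 64 512 -b Malloc0        # 64 MiB bdev, 512-byte blocks
  "$rpc" -s "$sock" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  "$rpc" -s "$sock" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
      --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  "$rpc" -s "$sock" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  "$rpc" -s "$sock" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  "$rpc" -s "$sock" nvmf_get_subsystems

  # Discovery-controller identify, matching host/identify.sh@39 above
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
      -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all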
00:15:27.361 [2024-07-15 17:04:17.542781] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74739 ] 00:15:27.625 [2024-07-15 17:04:17.686111] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:15:27.625 [2024-07-15 17:04:17.686196] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:15:27.625 [2024-07-15 17:04:17.686205] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:15:27.625 [2024-07-15 17:04:17.686221] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:15:27.625 [2024-07-15 17:04:17.686231] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:15:27.625 [2024-07-15 17:04:17.686432] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:15:27.625 [2024-07-15 17:04:17.686499] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xc262c0 0 00:15:27.625 [2024-07-15 17:04:17.692821] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:15:27.625 [2024-07-15 17:04:17.692847] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:15:27.625 [2024-07-15 17:04:17.692854] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:15:27.625 [2024-07-15 17:04:17.692858] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:15:27.625 [2024-07-15 17:04:17.692915] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.625 [2024-07-15 17:04:17.692924] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.625 [2024-07-15 17:04:17.692929] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc262c0) 00:15:27.625 [2024-07-15 17:04:17.692946] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:15:27.625 [2024-07-15 17:04:17.692980] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc67940, cid 0, qid 0 00:15:27.625 [2024-07-15 17:04:17.701388] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.625 [2024-07-15 17:04:17.701410] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.625 [2024-07-15 17:04:17.701415] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.625 [2024-07-15 17:04:17.701421] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc67940) on tqpair=0xc262c0 00:15:27.625 [2024-07-15 17:04:17.701439] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:15:27.625 [2024-07-15 17:04:17.701449] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:15:27.625 [2024-07-15 17:04:17.701457] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:15:27.625 [2024-07-15 17:04:17.701478] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.625 [2024-07-15 17:04:17.701484] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.625 
[2024-07-15 17:04:17.701489] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc262c0) 00:15:27.625 [2024-07-15 17:04:17.701499] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.625 [2024-07-15 17:04:17.701529] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc67940, cid 0, qid 0 00:15:27.625 [2024-07-15 17:04:17.701653] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.625 [2024-07-15 17:04:17.701661] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.625 [2024-07-15 17:04:17.701665] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.625 [2024-07-15 17:04:17.701669] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc67940) on tqpair=0xc262c0 00:15:27.625 [2024-07-15 17:04:17.701676] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:15:27.625 [2024-07-15 17:04:17.701684] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:15:27.625 [2024-07-15 17:04:17.701692] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.625 [2024-07-15 17:04:17.701697] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.625 [2024-07-15 17:04:17.701701] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc262c0) 00:15:27.625 [2024-07-15 17:04:17.701709] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.625 [2024-07-15 17:04:17.701730] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc67940, cid 0, qid 0 00:15:27.625 [2024-07-15 17:04:17.701798] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.625 [2024-07-15 17:04:17.701805] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.625 [2024-07-15 17:04:17.701809] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.625 [2024-07-15 17:04:17.701814] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc67940) on tqpair=0xc262c0 00:15:27.625 [2024-07-15 17:04:17.701822] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:15:27.625 [2024-07-15 17:04:17.701831] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:15:27.625 [2024-07-15 17:04:17.701839] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.625 [2024-07-15 17:04:17.701843] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.625 [2024-07-15 17:04:17.701847] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc262c0) 00:15:27.625 [2024-07-15 17:04:17.701855] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.625 [2024-07-15 17:04:17.701875] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc67940, cid 0, qid 0 00:15:27.625 [2024-07-15 17:04:17.701939] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.625 [2024-07-15 17:04:17.701946] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:15:27.625 [2024-07-15 17:04:17.701950] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.625 [2024-07-15 17:04:17.701954] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc67940) on tqpair=0xc262c0 00:15:27.625 [2024-07-15 17:04:17.701961] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:27.625 [2024-07-15 17:04:17.701972] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.625 [2024-07-15 17:04:17.701977] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.625 [2024-07-15 17:04:17.701981] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc262c0) 00:15:27.625 [2024-07-15 17:04:17.701989] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.625 [2024-07-15 17:04:17.702008] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc67940, cid 0, qid 0 00:15:27.625 [2024-07-15 17:04:17.702068] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.625 [2024-07-15 17:04:17.702075] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.625 [2024-07-15 17:04:17.702079] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.625 [2024-07-15 17:04:17.702084] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc67940) on tqpair=0xc262c0 00:15:27.625 [2024-07-15 17:04:17.702090] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:15:27.626 [2024-07-15 17:04:17.702096] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:15:27.626 [2024-07-15 17:04:17.702104] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:27.626 [2024-07-15 17:04:17.702211] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:15:27.626 [2024-07-15 17:04:17.702224] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:27.626 [2024-07-15 17:04:17.702236] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.626 [2024-07-15 17:04:17.702241] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.626 [2024-07-15 17:04:17.702245] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc262c0) 00:15:27.626 [2024-07-15 17:04:17.702253] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.626 [2024-07-15 17:04:17.702274] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc67940, cid 0, qid 0 00:15:27.626 [2024-07-15 17:04:17.702329] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.626 [2024-07-15 17:04:17.702337] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.626 [2024-07-15 17:04:17.702341] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.626 [2024-07-15 17:04:17.702345] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc67940) on tqpair=0xc262c0 00:15:27.626 [2024-07-15 17:04:17.702351] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:27.626 [2024-07-15 17:04:17.702384] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.626 [2024-07-15 17:04:17.702390] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.626 [2024-07-15 17:04:17.702394] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc262c0) 00:15:27.626 [2024-07-15 17:04:17.702403] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.626 [2024-07-15 17:04:17.702425] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc67940, cid 0, qid 0 00:15:27.626 [2024-07-15 17:04:17.702492] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.626 [2024-07-15 17:04:17.702499] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.626 [2024-07-15 17:04:17.702503] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.626 [2024-07-15 17:04:17.702507] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc67940) on tqpair=0xc262c0 00:15:27.626 [2024-07-15 17:04:17.702513] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:27.626 [2024-07-15 17:04:17.702519] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:15:27.626 [2024-07-15 17:04:17.702527] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:15:27.626 [2024-07-15 17:04:17.702539] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:15:27.626 [2024-07-15 17:04:17.702552] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.626 [2024-07-15 17:04:17.702557] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc262c0) 00:15:27.626 [2024-07-15 17:04:17.702565] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.626 [2024-07-15 17:04:17.702586] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc67940, cid 0, qid 0 00:15:27.626 [2024-07-15 17:04:17.702700] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:27.626 [2024-07-15 17:04:17.702707] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:27.626 [2024-07-15 17:04:17.702711] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:27.626 [2024-07-15 17:04:17.702716] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc262c0): datao=0, datal=4096, cccid=0 00:15:27.626 [2024-07-15 17:04:17.702721] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc67940) on tqpair(0xc262c0): expected_datao=0, payload_size=4096 00:15:27.626 [2024-07-15 17:04:17.702727] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.626 [2024-07-15 17:04:17.702736] 
nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:27.626 [2024-07-15 17:04:17.702741] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:27.626 [2024-07-15 17:04:17.702751] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.626 [2024-07-15 17:04:17.702757] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.626 [2024-07-15 17:04:17.702761] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.626 [2024-07-15 17:04:17.702766] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc67940) on tqpair=0xc262c0 00:15:27.626 [2024-07-15 17:04:17.702776] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:15:27.626 [2024-07-15 17:04:17.702782] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:15:27.626 [2024-07-15 17:04:17.702787] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:15:27.626 [2024-07-15 17:04:17.702794] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:15:27.626 [2024-07-15 17:04:17.702799] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:15:27.626 [2024-07-15 17:04:17.702804] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:15:27.626 [2024-07-15 17:04:17.702814] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:15:27.626 [2024-07-15 17:04:17.702823] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.626 [2024-07-15 17:04:17.702827] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.626 [2024-07-15 17:04:17.702831] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc262c0) 00:15:27.626 [2024-07-15 17:04:17.702840] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:27.626 [2024-07-15 17:04:17.702861] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc67940, cid 0, qid 0 00:15:27.626 [2024-07-15 17:04:17.702939] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.626 [2024-07-15 17:04:17.702952] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.626 [2024-07-15 17:04:17.702957] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.626 [2024-07-15 17:04:17.702962] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc67940) on tqpair=0xc262c0 00:15:27.626 [2024-07-15 17:04:17.702971] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.626 [2024-07-15 17:04:17.702976] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.626 [2024-07-15 17:04:17.702980] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc262c0) 00:15:27.626 [2024-07-15 17:04:17.702987] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:27.626 [2024-07-15 17:04:17.702995] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:15:27.626 [2024-07-15 17:04:17.702999] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.626 [2024-07-15 17:04:17.703003] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xc262c0) 00:15:27.626 [2024-07-15 17:04:17.703010] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:27.626 [2024-07-15 17:04:17.703017] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.626 [2024-07-15 17:04:17.703022] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.626 [2024-07-15 17:04:17.703026] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xc262c0) 00:15:27.626 [2024-07-15 17:04:17.703032] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:27.626 [2024-07-15 17:04:17.703039] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.626 [2024-07-15 17:04:17.703043] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.626 [2024-07-15 17:04:17.703047] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc262c0) 00:15:27.626 [2024-07-15 17:04:17.703053] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:27.626 [2024-07-15 17:04:17.703059] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:15:27.626 [2024-07-15 17:04:17.703076] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:27.626 [2024-07-15 17:04:17.703084] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.626 [2024-07-15 17:04:17.703089] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc262c0) 00:15:27.626 [2024-07-15 17:04:17.703096] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.626 [2024-07-15 17:04:17.703119] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc67940, cid 0, qid 0 00:15:27.626 [2024-07-15 17:04:17.703127] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc67ac0, cid 1, qid 0 00:15:27.626 [2024-07-15 17:04:17.703132] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc67c40, cid 2, qid 0 00:15:27.626 [2024-07-15 17:04:17.703137] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc67dc0, cid 3, qid 0 00:15:27.626 [2024-07-15 17:04:17.703141] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc67f40, cid 4, qid 0 00:15:27.626 [2024-07-15 17:04:17.703240] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.626 [2024-07-15 17:04:17.703247] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.626 [2024-07-15 17:04:17.703251] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.626 [2024-07-15 17:04:17.703255] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc67f40) on tqpair=0xc262c0 00:15:27.626 [2024-07-15 17:04:17.703262] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:15:27.626 [2024-07-15 17:04:17.703272] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:15:27.626 [2024-07-15 17:04:17.703286] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.626 [2024-07-15 17:04:17.703291] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc262c0) 00:15:27.626 [2024-07-15 17:04:17.703299] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.626 [2024-07-15 17:04:17.703320] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc67f40, cid 4, qid 0 00:15:27.626 [2024-07-15 17:04:17.703418] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:27.626 [2024-07-15 17:04:17.703427] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:27.626 [2024-07-15 17:04:17.703431] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:27.626 [2024-07-15 17:04:17.703435] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc262c0): datao=0, datal=4096, cccid=4 00:15:27.626 [2024-07-15 17:04:17.703440] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc67f40) on tqpair(0xc262c0): expected_datao=0, payload_size=4096 00:15:27.626 [2024-07-15 17:04:17.703445] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.626 [2024-07-15 17:04:17.703453] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:27.626 [2024-07-15 17:04:17.703457] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:27.627 [2024-07-15 17:04:17.703476] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.627 [2024-07-15 17:04:17.703483] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.627 [2024-07-15 17:04:17.703487] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.627 [2024-07-15 17:04:17.703492] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc67f40) on tqpair=0xc262c0 00:15:27.627 [2024-07-15 17:04:17.703507] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:15:27.627 [2024-07-15 17:04:17.703546] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.627 [2024-07-15 17:04:17.703554] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc262c0) 00:15:27.627 [2024-07-15 17:04:17.703562] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.627 [2024-07-15 17:04:17.703570] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.627 [2024-07-15 17:04:17.703575] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.627 [2024-07-15 17:04:17.703579] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xc262c0) 00:15:27.627 [2024-07-15 17:04:17.703585] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:15:27.627 [2024-07-15 17:04:17.703614] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc67f40, cid 4, qid 0 00:15:27.627 [2024-07-15 17:04:17.703623] 
nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc680c0, cid 5, qid 0 00:15:27.627 [2024-07-15 17:04:17.703755] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:27.627 [2024-07-15 17:04:17.703770] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:27.627 [2024-07-15 17:04:17.703776] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:27.627 [2024-07-15 17:04:17.703780] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc262c0): datao=0, datal=1024, cccid=4 00:15:27.627 [2024-07-15 17:04:17.703785] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc67f40) on tqpair(0xc262c0): expected_datao=0, payload_size=1024 00:15:27.627 [2024-07-15 17:04:17.703791] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.627 [2024-07-15 17:04:17.703798] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:27.627 [2024-07-15 17:04:17.703802] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:27.627 [2024-07-15 17:04:17.703808] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.627 [2024-07-15 17:04:17.703815] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.627 [2024-07-15 17:04:17.703819] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.627 [2024-07-15 17:04:17.703823] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc680c0) on tqpair=0xc262c0 00:15:27.627 [2024-07-15 17:04:17.703845] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.627 [2024-07-15 17:04:17.703854] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.627 [2024-07-15 17:04:17.703858] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.627 [2024-07-15 17:04:17.703862] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc67f40) on tqpair=0xc262c0 00:15:27.627 [2024-07-15 17:04:17.703877] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.627 [2024-07-15 17:04:17.703882] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc262c0) 00:15:27.627 [2024-07-15 17:04:17.703890] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.627 [2024-07-15 17:04:17.703918] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc67f40, cid 4, qid 0 00:15:27.627 [2024-07-15 17:04:17.704000] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:27.627 [2024-07-15 17:04:17.704007] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:27.627 [2024-07-15 17:04:17.704011] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:27.627 [2024-07-15 17:04:17.704015] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc262c0): datao=0, datal=3072, cccid=4 00:15:27.627 [2024-07-15 17:04:17.704020] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc67f40) on tqpair(0xc262c0): expected_datao=0, payload_size=3072 00:15:27.627 [2024-07-15 17:04:17.704025] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.627 [2024-07-15 17:04:17.704033] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:27.627 [2024-07-15 17:04:17.704037] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:27.627 [2024-07-15 
17:04:17.704046] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.627 [2024-07-15 17:04:17.704052] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.627 [2024-07-15 17:04:17.704056] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.627 [2024-07-15 17:04:17.704061] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc67f40) on tqpair=0xc262c0 00:15:27.627 [2024-07-15 17:04:17.704072] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.627 [2024-07-15 17:04:17.704077] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc262c0) 00:15:27.627 [2024-07-15 17:04:17.704085] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.627 [2024-07-15 17:04:17.704110] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc67f40, cid 4, qid 0 00:15:27.627 [2024-07-15 17:04:17.704180] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:27.627 [2024-07-15 17:04:17.704187] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:27.627 [2024-07-15 17:04:17.704191] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:27.627 [2024-07-15 17:04:17.704195] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc262c0): datao=0, datal=8, cccid=4 00:15:27.627 [2024-07-15 17:04:17.704200] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc67f40) on tqpair(0xc262c0): expected_datao=0, payload_size=8 00:15:27.627 [2024-07-15 17:04:17.704205] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.627 [2024-07-15 17:04:17.704212] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:27.627 [2024-07-15 17:04:17.704216] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:27.627 [2024-07-15 17:04:17.704234] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.627 [2024-07-15 17:04:17.704242] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.627 [2024-07-15 17:04:17.704245] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.627 [2024-07-15 17:04:17.704250] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc67f40) on tqpair=0xc262c0 00:15:27.627 ===================================================== 00:15:27.627 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:15:27.627 ===================================================== 00:15:27.627 Controller Capabilities/Features 00:15:27.627 ================================ 00:15:27.627 Vendor ID: 0000 00:15:27.627 Subsystem Vendor ID: 0000 00:15:27.627 Serial Number: .................... 00:15:27.627 Model Number: ........................................ 
00:15:27.627 Firmware Version: 24.09 00:15:27.627 Recommended Arb Burst: 0 00:15:27.627 IEEE OUI Identifier: 00 00 00 00:15:27.627 Multi-path I/O 00:15:27.627 May have multiple subsystem ports: No 00:15:27.627 May have multiple controllers: No 00:15:27.627 Associated with SR-IOV VF: No 00:15:27.627 Max Data Transfer Size: 131072 00:15:27.627 Max Number of Namespaces: 0 00:15:27.627 Max Number of I/O Queues: 1024 00:15:27.627 NVMe Specification Version (VS): 1.3 00:15:27.627 NVMe Specification Version (Identify): 1.3 00:15:27.627 Maximum Queue Entries: 128 00:15:27.627 Contiguous Queues Required: Yes 00:15:27.627 Arbitration Mechanisms Supported 00:15:27.627 Weighted Round Robin: Not Supported 00:15:27.627 Vendor Specific: Not Supported 00:15:27.627 Reset Timeout: 15000 ms 00:15:27.627 Doorbell Stride: 4 bytes 00:15:27.627 NVM Subsystem Reset: Not Supported 00:15:27.627 Command Sets Supported 00:15:27.627 NVM Command Set: Supported 00:15:27.627 Boot Partition: Not Supported 00:15:27.627 Memory Page Size Minimum: 4096 bytes 00:15:27.627 Memory Page Size Maximum: 4096 bytes 00:15:27.627 Persistent Memory Region: Not Supported 00:15:27.627 Optional Asynchronous Events Supported 00:15:27.627 Namespace Attribute Notices: Not Supported 00:15:27.627 Firmware Activation Notices: Not Supported 00:15:27.627 ANA Change Notices: Not Supported 00:15:27.627 PLE Aggregate Log Change Notices: Not Supported 00:15:27.627 LBA Status Info Alert Notices: Not Supported 00:15:27.627 EGE Aggregate Log Change Notices: Not Supported 00:15:27.627 Normal NVM Subsystem Shutdown event: Not Supported 00:15:27.627 Zone Descriptor Change Notices: Not Supported 00:15:27.627 Discovery Log Change Notices: Supported 00:15:27.627 Controller Attributes 00:15:27.627 128-bit Host Identifier: Not Supported 00:15:27.627 Non-Operational Permissive Mode: Not Supported 00:15:27.627 NVM Sets: Not Supported 00:15:27.627 Read Recovery Levels: Not Supported 00:15:27.627 Endurance Groups: Not Supported 00:15:27.627 Predictable Latency Mode: Not Supported 00:15:27.627 Traffic Based Keep ALive: Not Supported 00:15:27.627 Namespace Granularity: Not Supported 00:15:27.627 SQ Associations: Not Supported 00:15:27.627 UUID List: Not Supported 00:15:27.627 Multi-Domain Subsystem: Not Supported 00:15:27.627 Fixed Capacity Management: Not Supported 00:15:27.627 Variable Capacity Management: Not Supported 00:15:27.627 Delete Endurance Group: Not Supported 00:15:27.627 Delete NVM Set: Not Supported 00:15:27.627 Extended LBA Formats Supported: Not Supported 00:15:27.627 Flexible Data Placement Supported: Not Supported 00:15:27.627 00:15:27.627 Controller Memory Buffer Support 00:15:27.627 ================================ 00:15:27.627 Supported: No 00:15:27.627 00:15:27.627 Persistent Memory Region Support 00:15:27.627 ================================ 00:15:27.627 Supported: No 00:15:27.627 00:15:27.627 Admin Command Set Attributes 00:15:27.627 ============================ 00:15:27.627 Security Send/Receive: Not Supported 00:15:27.627 Format NVM: Not Supported 00:15:27.627 Firmware Activate/Download: Not Supported 00:15:27.627 Namespace Management: Not Supported 00:15:27.627 Device Self-Test: Not Supported 00:15:27.627 Directives: Not Supported 00:15:27.627 NVMe-MI: Not Supported 00:15:27.627 Virtualization Management: Not Supported 00:15:27.627 Doorbell Buffer Config: Not Supported 00:15:27.627 Get LBA Status Capability: Not Supported 00:15:27.627 Command & Feature Lockdown Capability: Not Supported 00:15:27.627 Abort Command Limit: 1 00:15:27.627 Async 
Event Request Limit: 4 00:15:27.627 Number of Firmware Slots: N/A 00:15:27.627 Firmware Slot 1 Read-Only: N/A 00:15:27.627 Firmware Activation Without Reset: N/A 00:15:27.628 Multiple Update Detection Support: N/A 00:15:27.628 Firmware Update Granularity: No Information Provided 00:15:27.628 Per-Namespace SMART Log: No 00:15:27.628 Asymmetric Namespace Access Log Page: Not Supported 00:15:27.628 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:15:27.628 Command Effects Log Page: Not Supported 00:15:27.628 Get Log Page Extended Data: Supported 00:15:27.628 Telemetry Log Pages: Not Supported 00:15:27.628 Persistent Event Log Pages: Not Supported 00:15:27.628 Supported Log Pages Log Page: May Support 00:15:27.628 Commands Supported & Effects Log Page: Not Supported 00:15:27.628 Feature Identifiers & Effects Log Page:May Support 00:15:27.628 NVMe-MI Commands & Effects Log Page: May Support 00:15:27.628 Data Area 4 for Telemetry Log: Not Supported 00:15:27.628 Error Log Page Entries Supported: 128 00:15:27.628 Keep Alive: Not Supported 00:15:27.628 00:15:27.628 NVM Command Set Attributes 00:15:27.628 ========================== 00:15:27.628 Submission Queue Entry Size 00:15:27.628 Max: 1 00:15:27.628 Min: 1 00:15:27.628 Completion Queue Entry Size 00:15:27.628 Max: 1 00:15:27.628 Min: 1 00:15:27.628 Number of Namespaces: 0 00:15:27.628 Compare Command: Not Supported 00:15:27.628 Write Uncorrectable Command: Not Supported 00:15:27.628 Dataset Management Command: Not Supported 00:15:27.628 Write Zeroes Command: Not Supported 00:15:27.628 Set Features Save Field: Not Supported 00:15:27.628 Reservations: Not Supported 00:15:27.628 Timestamp: Not Supported 00:15:27.628 Copy: Not Supported 00:15:27.628 Volatile Write Cache: Not Present 00:15:27.628 Atomic Write Unit (Normal): 1 00:15:27.628 Atomic Write Unit (PFail): 1 00:15:27.628 Atomic Compare & Write Unit: 1 00:15:27.628 Fused Compare & Write: Supported 00:15:27.628 Scatter-Gather List 00:15:27.628 SGL Command Set: Supported 00:15:27.628 SGL Keyed: Supported 00:15:27.628 SGL Bit Bucket Descriptor: Not Supported 00:15:27.628 SGL Metadata Pointer: Not Supported 00:15:27.628 Oversized SGL: Not Supported 00:15:27.628 SGL Metadata Address: Not Supported 00:15:27.628 SGL Offset: Supported 00:15:27.628 Transport SGL Data Block: Not Supported 00:15:27.628 Replay Protected Memory Block: Not Supported 00:15:27.628 00:15:27.628 Firmware Slot Information 00:15:27.628 ========================= 00:15:27.628 Active slot: 0 00:15:27.628 00:15:27.628 00:15:27.628 Error Log 00:15:27.628 ========= 00:15:27.628 00:15:27.628 Active Namespaces 00:15:27.628 ================= 00:15:27.628 Discovery Log Page 00:15:27.628 ================== 00:15:27.628 Generation Counter: 2 00:15:27.628 Number of Records: 2 00:15:27.628 Record Format: 0 00:15:27.628 00:15:27.628 Discovery Log Entry 0 00:15:27.628 ---------------------- 00:15:27.628 Transport Type: 3 (TCP) 00:15:27.628 Address Family: 1 (IPv4) 00:15:27.628 Subsystem Type: 3 (Current Discovery Subsystem) 00:15:27.628 Entry Flags: 00:15:27.628 Duplicate Returned Information: 1 00:15:27.628 Explicit Persistent Connection Support for Discovery: 1 00:15:27.628 Transport Requirements: 00:15:27.628 Secure Channel: Not Required 00:15:27.628 Port ID: 0 (0x0000) 00:15:27.628 Controller ID: 65535 (0xffff) 00:15:27.628 Admin Max SQ Size: 128 00:15:27.628 Transport Service Identifier: 4420 00:15:27.628 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:15:27.628 Transport Address: 10.0.0.2 00:15:27.628 
Discovery Log Entry 1 00:15:27.628 ---------------------- 00:15:27.628 Transport Type: 3 (TCP) 00:15:27.628 Address Family: 1 (IPv4) 00:15:27.628 Subsystem Type: 2 (NVM Subsystem) 00:15:27.628 Entry Flags: 00:15:27.628 Duplicate Returned Information: 0 00:15:27.628 Explicit Persistent Connection Support for Discovery: 0 00:15:27.628 Transport Requirements: 00:15:27.628 Secure Channel: Not Required 00:15:27.628 Port ID: 0 (0x0000) 00:15:27.628 Controller ID: 65535 (0xffff) 00:15:27.628 Admin Max SQ Size: 128 00:15:27.628 Transport Service Identifier: 4420 00:15:27.628 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:15:27.628 Transport Address: 10.0.0.2 [2024-07-15 17:04:17.704410] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:15:27.628 [2024-07-15 17:04:17.704430] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc67940) on tqpair=0xc262c0 00:15:27.628 [2024-07-15 17:04:17.704438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.628 [2024-07-15 17:04:17.704444] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc67ac0) on tqpair=0xc262c0 00:15:27.628 [2024-07-15 17:04:17.704450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.628 [2024-07-15 17:04:17.704455] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc67c40) on tqpair=0xc262c0 00:15:27.628 [2024-07-15 17:04:17.704460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.628 [2024-07-15 17:04:17.704465] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc67dc0) on tqpair=0xc262c0 00:15:27.628 [2024-07-15 17:04:17.704470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.628 [2024-07-15 17:04:17.704481] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.628 [2024-07-15 17:04:17.704486] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.628 [2024-07-15 17:04:17.704490] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc262c0) 00:15:27.628 [2024-07-15 17:04:17.704498] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.628 [2024-07-15 17:04:17.704529] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc67dc0, cid 3, qid 0 00:15:27.628 [2024-07-15 17:04:17.704594] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.628 [2024-07-15 17:04:17.704601] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.628 [2024-07-15 17:04:17.704606] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.628 [2024-07-15 17:04:17.704610] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc67dc0) on tqpair=0xc262c0 00:15:27.628 [2024-07-15 17:04:17.704619] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.628 [2024-07-15 17:04:17.704623] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.628 [2024-07-15 17:04:17.704627] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc262c0) 00:15:27.628 [2024-07-15 17:04:17.704635] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.628 [2024-07-15 17:04:17.704659] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc67dc0, cid 3, qid 0 00:15:27.628 [2024-07-15 17:04:17.704745] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.628 [2024-07-15 17:04:17.704752] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.628 [2024-07-15 17:04:17.704756] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.628 [2024-07-15 17:04:17.704761] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc67dc0) on tqpair=0xc262c0 00:15:27.628 [2024-07-15 17:04:17.704766] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:15:27.628 [2024-07-15 17:04:17.704772] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:15:27.628 [2024-07-15 17:04:17.704783] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.628 [2024-07-15 17:04:17.704788] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.628 [2024-07-15 17:04:17.704793] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc262c0) 00:15:27.628 [2024-07-15 17:04:17.704801] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.628 [2024-07-15 17:04:17.704820] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc67dc0, cid 3, qid 0 00:15:27.628 [2024-07-15 17:04:17.704882] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.628 [2024-07-15 17:04:17.704896] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.628 [2024-07-15 17:04:17.704901] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.628 [2024-07-15 17:04:17.704906] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc67dc0) on tqpair=0xc262c0 00:15:27.628 [2024-07-15 17:04:17.704918] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.628 [2024-07-15 17:04:17.704923] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.628 [2024-07-15 17:04:17.704928] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc262c0) 00:15:27.628 [2024-07-15 17:04:17.704935] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.628 [2024-07-15 17:04:17.704955] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc67dc0, cid 3, qid 0 00:15:27.628 [2024-07-15 17:04:17.705010] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.628 [2024-07-15 17:04:17.705017] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.628 [2024-07-15 17:04:17.705022] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.628 [2024-07-15 17:04:17.705026] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc67dc0) on tqpair=0xc262c0 00:15:27.628 [2024-07-15 17:04:17.705038] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.628 [2024-07-15 17:04:17.705043] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.628 [2024-07-15 17:04:17.705047] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc262c0) 00:15:27.628 [2024-07-15 17:04:17.705054] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.628 [2024-07-15 17:04:17.705074] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc67dc0, cid 3, qid 0 00:15:27.628 [2024-07-15 17:04:17.705133] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.628 [2024-07-15 17:04:17.705147] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.628 [2024-07-15 17:04:17.705152] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.628 [2024-07-15 17:04:17.705157] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc67dc0) on tqpair=0xc262c0 00:15:27.628 [2024-07-15 17:04:17.705168] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.628 [2024-07-15 17:04:17.705173] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.628 [2024-07-15 17:04:17.705177] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc262c0) 00:15:27.629 [2024-07-15 17:04:17.705185] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.629 [2024-07-15 17:04:17.705205] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc67dc0, cid 3, qid 0 00:15:27.629 [2024-07-15 17:04:17.705255] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.629 [2024-07-15 17:04:17.705267] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.629 [2024-07-15 17:04:17.705271] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.629 [2024-07-15 17:04:17.705276] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc67dc0) on tqpair=0xc262c0 00:15:27.629 [2024-07-15 17:04:17.705287] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.629 [2024-07-15 17:04:17.705292] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.629 [2024-07-15 17:04:17.705296] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc262c0) 00:15:27.629 [2024-07-15 17:04:17.705304] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.629 [2024-07-15 17:04:17.705324] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc67dc0, cid 3, qid 0 00:15:27.629 [2024-07-15 17:04:17.709374] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.629 [2024-07-15 17:04:17.709393] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.629 [2024-07-15 17:04:17.709398] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.629 [2024-07-15 17:04:17.709403] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc67dc0) on tqpair=0xc262c0 00:15:27.629 [2024-07-15 17:04:17.709418] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.629 [2024-07-15 17:04:17.709423] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.629 [2024-07-15 17:04:17.709428] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc262c0) 00:15:27.629 [2024-07-15 17:04:17.709437] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.629 [2024-07-15 17:04:17.709463] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc67dc0, cid 3, qid 0 00:15:27.629 [2024-07-15 17:04:17.709522] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.629 [2024-07-15 17:04:17.709529] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.629 [2024-07-15 17:04:17.709533] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.629 [2024-07-15 17:04:17.709538] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc67dc0) on tqpair=0xc262c0 00:15:27.629 [2024-07-15 17:04:17.709546] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 4 milliseconds 00:15:27.629 00:15:27.629 17:04:17 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:15:27.629 [2024-07-15 17:04:17.754507] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:15:27.629 [2024-07-15 17:04:17.754708] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74741 ] 00:15:27.629 [2024-07-15 17:04:17.895997] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:15:27.629 [2024-07-15 17:04:17.896097] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:15:27.629 [2024-07-15 17:04:17.896106] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:15:27.629 [2024-07-15 17:04:17.896122] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:15:27.629 [2024-07-15 17:04:17.896131] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:15:27.629 [2024-07-15 17:04:17.896300] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:15:27.629 [2024-07-15 17:04:17.896382] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x18292c0 0 00:15:27.629 [2024-07-15 17:04:17.903406] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:15:27.629 [2024-07-15 17:04:17.903431] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:15:27.629 [2024-07-15 17:04:17.903437] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:15:27.629 [2024-07-15 17:04:17.903441] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:15:27.629 [2024-07-15 17:04:17.903510] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.629 [2024-07-15 17:04:17.903520] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.629 [2024-07-15 17:04:17.903526] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18292c0) 00:15:27.629 [2024-07-15 17:04:17.903544] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:15:27.629 [2024-07-15 17:04:17.903579] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x186a940, cid 0, qid 0 
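For reference while reading this trace: the records above show host/identify.sh invoking spdk_nvme_identify against the NVMe/TCP target at 10.0.0.2:4420 (subsystem nqn.2016-06.io.spdk:cnode1), and the *DEBUG* records that follow trace the admin queue pair connect and the controller-init state machine ("setting state to ...") inside the SPDK NVMe host driver. A minimal sketch of the same connect-and-identify flow against the public SPDK API is shown here; it assumes an SPDK development build is available to link against and is not the actual source of the identify tool.

/* Sketch only: connect to the NVMe/TCP subsystem exercised in this test and
 * print one field of the IDENTIFY CONTROLLER data, assuming the SPDK headers
 * and libraries are available. The app name below is purely illustrative. */
#include "spdk/stdinc.h"
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
        struct spdk_env_opts env_opts;
        struct spdk_nvme_transport_id trid = {};
        struct spdk_nvme_ctrlr *ctrlr;
        const struct spdk_nvme_ctrlr_data *cdata;

        spdk_env_opts_init(&env_opts);
        env_opts.name = "identify_sketch";   /* hypothetical application name */
        if (spdk_env_init(&env_opts) < 0) {
                return 1;
        }

        /* Same transport ID string that host/identify.sh passes to spdk_nvme_identify. */
        if (spdk_nvme_transport_id_parse(&trid,
                        "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
                        "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
                return 1;
        }

        /* This call drives the admin-queue connect and the controller-init
         * state machine that the trace below records. */
        ctrlr = spdk_nvme_connect(&trid, NULL, 0);
        if (ctrlr == NULL) {
                return 1;
        }

        cdata = spdk_nvme_ctrlr_get_data(ctrlr);
        printf("Serial Number: %.20s\n", (const char *)cdata->sn);

        spdk_nvme_detach(ctrlr);
        return 0;
}
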
00:15:27.629 [2024-07-15 17:04:17.911383] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.629 [2024-07-15 17:04:17.911406] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.629 [2024-07-15 17:04:17.911412] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.629 [2024-07-15 17:04:17.911418] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x186a940) on tqpair=0x18292c0 00:15:27.629 [2024-07-15 17:04:17.911433] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:15:27.629 [2024-07-15 17:04:17.911447] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:15:27.629 [2024-07-15 17:04:17.911454] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:15:27.629 [2024-07-15 17:04:17.911487] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.629 [2024-07-15 17:04:17.911495] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.629 [2024-07-15 17:04:17.911499] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18292c0) 00:15:27.629 [2024-07-15 17:04:17.911510] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.629 [2024-07-15 17:04:17.911541] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x186a940, cid 0, qid 0 00:15:27.629 [2024-07-15 17:04:17.911601] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.629 [2024-07-15 17:04:17.911609] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.629 [2024-07-15 17:04:17.911613] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.629 [2024-07-15 17:04:17.911618] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x186a940) on tqpair=0x18292c0 00:15:27.629 [2024-07-15 17:04:17.911625] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:15:27.629 [2024-07-15 17:04:17.911633] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:15:27.629 [2024-07-15 17:04:17.911642] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.629 [2024-07-15 17:04:17.911647] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.629 [2024-07-15 17:04:17.911651] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18292c0) 00:15:27.629 [2024-07-15 17:04:17.911659] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.629 [2024-07-15 17:04:17.911689] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x186a940, cid 0, qid 0 00:15:27.629 [2024-07-15 17:04:17.911737] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.629 [2024-07-15 17:04:17.911745] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.629 [2024-07-15 17:04:17.911749] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.629 [2024-07-15 17:04:17.911753] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x186a940) on tqpair=0x18292c0 00:15:27.629 [2024-07-15 17:04:17.911760] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: 
*DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:15:27.629 [2024-07-15 17:04:17.911770] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:15:27.629 [2024-07-15 17:04:17.911778] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.629 [2024-07-15 17:04:17.911782] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.629 [2024-07-15 17:04:17.911787] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18292c0) 00:15:27.629 [2024-07-15 17:04:17.911795] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.629 [2024-07-15 17:04:17.911815] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x186a940, cid 0, qid 0 00:15:27.629 [2024-07-15 17:04:17.911872] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.629 [2024-07-15 17:04:17.911879] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.629 [2024-07-15 17:04:17.911883] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.629 [2024-07-15 17:04:17.911887] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x186a940) on tqpair=0x18292c0 00:15:27.629 [2024-07-15 17:04:17.911894] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:27.629 [2024-07-15 17:04:17.911905] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.629 [2024-07-15 17:04:17.911911] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.630 [2024-07-15 17:04:17.911915] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18292c0) 00:15:27.630 [2024-07-15 17:04:17.911923] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.630 [2024-07-15 17:04:17.911941] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x186a940, cid 0, qid 0 00:15:27.630 [2024-07-15 17:04:17.912000] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.630 [2024-07-15 17:04:17.912008] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.630 [2024-07-15 17:04:17.912012] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.630 [2024-07-15 17:04:17.912016] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x186a940) on tqpair=0x18292c0 00:15:27.630 [2024-07-15 17:04:17.912022] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:15:27.630 [2024-07-15 17:04:17.912028] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:15:27.630 [2024-07-15 17:04:17.912036] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:27.630 [2024-07-15 17:04:17.912143] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:15:27.630 [2024-07-15 17:04:17.912148] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 
ms) 00:15:27.630 [2024-07-15 17:04:17.912159] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.630 [2024-07-15 17:04:17.912163] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.630 [2024-07-15 17:04:17.912168] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18292c0) 00:15:27.630 [2024-07-15 17:04:17.912175] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.630 [2024-07-15 17:04:17.912195] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x186a940, cid 0, qid 0 00:15:27.630 [2024-07-15 17:04:17.912254] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.630 [2024-07-15 17:04:17.912262] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.630 [2024-07-15 17:04:17.912266] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.630 [2024-07-15 17:04:17.912270] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x186a940) on tqpair=0x18292c0 00:15:27.630 [2024-07-15 17:04:17.912276] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:27.630 [2024-07-15 17:04:17.912287] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.630 [2024-07-15 17:04:17.912293] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.630 [2024-07-15 17:04:17.912297] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18292c0) 00:15:27.630 [2024-07-15 17:04:17.912305] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.630 [2024-07-15 17:04:17.912324] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x186a940, cid 0, qid 0 00:15:27.630 [2024-07-15 17:04:17.912406] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.630 [2024-07-15 17:04:17.912416] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.630 [2024-07-15 17:04:17.912420] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.630 [2024-07-15 17:04:17.912424] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x186a940) on tqpair=0x18292c0 00:15:27.630 [2024-07-15 17:04:17.912430] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:27.630 [2024-07-15 17:04:17.912436] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:15:27.630 [2024-07-15 17:04:17.912445] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:15:27.630 [2024-07-15 17:04:17.912457] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:15:27.630 [2024-07-15 17:04:17.912469] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.630 [2024-07-15 17:04:17.912474] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18292c0) 00:15:27.630 [2024-07-15 17:04:17.912482] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.630 [2024-07-15 17:04:17.912505] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x186a940, cid 0, qid 0 00:15:27.630 [2024-07-15 17:04:17.912599] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:27.630 [2024-07-15 17:04:17.912606] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:27.630 [2024-07-15 17:04:17.912611] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:27.630 [2024-07-15 17:04:17.912615] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x18292c0): datao=0, datal=4096, cccid=0 00:15:27.630 [2024-07-15 17:04:17.912621] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x186a940) on tqpair(0x18292c0): expected_datao=0, payload_size=4096 00:15:27.630 [2024-07-15 17:04:17.912626] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.630 [2024-07-15 17:04:17.912635] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:27.630 [2024-07-15 17:04:17.912640] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:27.630 [2024-07-15 17:04:17.912649] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.630 [2024-07-15 17:04:17.912655] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.630 [2024-07-15 17:04:17.912659] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.630 [2024-07-15 17:04:17.912664] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x186a940) on tqpair=0x18292c0 00:15:27.630 [2024-07-15 17:04:17.912674] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:15:27.630 [2024-07-15 17:04:17.912680] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:15:27.630 [2024-07-15 17:04:17.912685] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:15:27.630 [2024-07-15 17:04:17.912690] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:15:27.630 [2024-07-15 17:04:17.912695] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:15:27.630 [2024-07-15 17:04:17.912701] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:15:27.630 [2024-07-15 17:04:17.912711] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:15:27.630 [2024-07-15 17:04:17.912719] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.630 [2024-07-15 17:04:17.912724] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.630 [2024-07-15 17:04:17.912728] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18292c0) 00:15:27.630 [2024-07-15 17:04:17.912736] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:27.630 [2024-07-15 17:04:17.912756] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x186a940, cid 0, qid 0 00:15:27.630 [2024-07-15 17:04:17.912816] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.630 [2024-07-15 17:04:17.912824] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:15:27.630 [2024-07-15 17:04:17.912828] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.630 [2024-07-15 17:04:17.912832] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x186a940) on tqpair=0x18292c0 00:15:27.630 [2024-07-15 17:04:17.912842] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.630 [2024-07-15 17:04:17.912846] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.630 [2024-07-15 17:04:17.912850] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18292c0) 00:15:27.630 [2024-07-15 17:04:17.912857] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:27.630 [2024-07-15 17:04:17.912865] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.630 [2024-07-15 17:04:17.912869] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.630 [2024-07-15 17:04:17.912873] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x18292c0) 00:15:27.630 [2024-07-15 17:04:17.912879] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:27.630 [2024-07-15 17:04:17.912886] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.630 [2024-07-15 17:04:17.912890] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.630 [2024-07-15 17:04:17.912894] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x18292c0) 00:15:27.630 [2024-07-15 17:04:17.912900] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:27.630 [2024-07-15 17:04:17.912906] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.630 [2024-07-15 17:04:17.912911] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.630 [2024-07-15 17:04:17.912914] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18292c0) 00:15:27.630 [2024-07-15 17:04:17.912920] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:27.630 [2024-07-15 17:04:17.912926] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:15:27.630 [2024-07-15 17:04:17.912941] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:27.630 [2024-07-15 17:04:17.912950] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.630 [2024-07-15 17:04:17.912954] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x18292c0) 00:15:27.630 [2024-07-15 17:04:17.912962] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.630 [2024-07-15 17:04:17.912984] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x186a940, cid 0, qid 0 00:15:27.630 [2024-07-15 17:04:17.912992] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x186aac0, cid 1, qid 0 00:15:27.630 [2024-07-15 17:04:17.912997] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x186ac40, cid 2, qid 0 00:15:27.630 [2024-07-15 17:04:17.913002] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x186adc0, cid 3, qid 0 00:15:27.630 [2024-07-15 17:04:17.913008] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x186af40, cid 4, qid 0 00:15:27.630 [2024-07-15 17:04:17.913107] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.630 [2024-07-15 17:04:17.913122] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.630 [2024-07-15 17:04:17.913127] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.630 [2024-07-15 17:04:17.913132] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x186af40) on tqpair=0x18292c0 00:15:27.630 [2024-07-15 17:04:17.913139] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:15:27.630 [2024-07-15 17:04:17.913150] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:27.630 [2024-07-15 17:04:17.913162] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:15:27.630 [2024-07-15 17:04:17.913170] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:15:27.630 [2024-07-15 17:04:17.913178] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.630 [2024-07-15 17:04:17.913183] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.630 [2024-07-15 17:04:17.913187] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x18292c0) 00:15:27.631 [2024-07-15 17:04:17.913195] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:27.631 [2024-07-15 17:04:17.913217] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x186af40, cid 4, qid 0 00:15:27.631 [2024-07-15 17:04:17.913277] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.631 [2024-07-15 17:04:17.913284] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.631 [2024-07-15 17:04:17.913288] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.631 [2024-07-15 17:04:17.913292] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x186af40) on tqpair=0x18292c0 00:15:27.631 [2024-07-15 17:04:17.913370] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:15:27.631 [2024-07-15 17:04:17.913392] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:15:27.631 [2024-07-15 17:04:17.913402] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.631 [2024-07-15 17:04:17.913406] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x18292c0) 00:15:27.631 [2024-07-15 17:04:17.913414] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.631 [2024-07-15 17:04:17.913436] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x186af40, cid 4, qid 0 
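At this point in the trace the init state machine has set the keep-alive timer (every 5000000 us) and the number of queues, and moves on to identifying active namespaces (IDENTIFY cdw10:00000002, then per-namespace IDENTIFY and namespace-ID descriptors). A hedged sketch of walking the resulting namespaces from application code, assuming a controller handle obtained as in the sketch above, is:

/* Sketch only: enumerate the active namespaces that the identify sequence
 * below discovers ("Namespace 1 was added"), using the public SPDK
 * namespace iterators. */
#include "spdk/stdinc.h"
#include "spdk/nvme.h"

static void
print_active_namespaces(struct spdk_nvme_ctrlr *ctrlr)
{
        uint32_t nsid;

        for (nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr); nsid != 0;
             nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
                struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);
                const struct spdk_nvme_ns_data *nsdata = spdk_nvme_ns_get_data(ns);

                printf("Namespace %u: %ju blocks, %u-byte sectors\n", nsid,
                       (uintmax_t)nsdata->nsze, spdk_nvme_ns_get_sector_size(ns));
        }
}
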
00:15:27.631 [2024-07-15 17:04:17.913502] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:27.631 [2024-07-15 17:04:17.913510] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:27.631 [2024-07-15 17:04:17.913514] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:27.631 [2024-07-15 17:04:17.913518] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x18292c0): datao=0, datal=4096, cccid=4 00:15:27.631 [2024-07-15 17:04:17.913523] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x186af40) on tqpair(0x18292c0): expected_datao=0, payload_size=4096 00:15:27.631 [2024-07-15 17:04:17.913528] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.631 [2024-07-15 17:04:17.913536] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:27.631 [2024-07-15 17:04:17.913541] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:27.631 [2024-07-15 17:04:17.913550] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.631 [2024-07-15 17:04:17.913556] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.631 [2024-07-15 17:04:17.913560] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.631 [2024-07-15 17:04:17.913564] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x186af40) on tqpair=0x18292c0 00:15:27.631 [2024-07-15 17:04:17.913582] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:15:27.631 [2024-07-15 17:04:17.913596] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:15:27.631 [2024-07-15 17:04:17.913609] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:15:27.631 [2024-07-15 17:04:17.913618] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.631 [2024-07-15 17:04:17.913623] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x18292c0) 00:15:27.631 [2024-07-15 17:04:17.913631] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.631 [2024-07-15 17:04:17.913652] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x186af40, cid 4, qid 0 00:15:27.631 [2024-07-15 17:04:17.913732] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:27.631 [2024-07-15 17:04:17.913742] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:27.631 [2024-07-15 17:04:17.913746] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:27.631 [2024-07-15 17:04:17.913750] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x18292c0): datao=0, datal=4096, cccid=4 00:15:27.631 [2024-07-15 17:04:17.913756] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x186af40) on tqpair(0x18292c0): expected_datao=0, payload_size=4096 00:15:27.631 [2024-07-15 17:04:17.913762] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.631 [2024-07-15 17:04:17.913769] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:27.631 [2024-07-15 17:04:17.913774] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:27.631 [2024-07-15 17:04:17.913782] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: 
*DEBUG*: pdu type = 5 00:15:27.631 [2024-07-15 17:04:17.913789] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.631 [2024-07-15 17:04:17.913793] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.631 [2024-07-15 17:04:17.913797] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x186af40) on tqpair=0x18292c0 00:15:27.631 [2024-07-15 17:04:17.913815] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:27.631 [2024-07-15 17:04:17.913828] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:27.631 [2024-07-15 17:04:17.913838] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.631 [2024-07-15 17:04:17.913843] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x18292c0) 00:15:27.631 [2024-07-15 17:04:17.913851] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.631 [2024-07-15 17:04:17.913872] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x186af40, cid 4, qid 0 00:15:27.631 [2024-07-15 17:04:17.913932] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:27.631 [2024-07-15 17:04:17.913947] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:27.631 [2024-07-15 17:04:17.913952] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:27.631 [2024-07-15 17:04:17.913956] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x18292c0): datao=0, datal=4096, cccid=4 00:15:27.631 [2024-07-15 17:04:17.913961] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x186af40) on tqpair(0x18292c0): expected_datao=0, payload_size=4096 00:15:27.631 [2024-07-15 17:04:17.913967] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.631 [2024-07-15 17:04:17.913974] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:27.631 [2024-07-15 17:04:17.913979] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:27.631 [2024-07-15 17:04:17.913988] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.631 [2024-07-15 17:04:17.913994] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.631 [2024-07-15 17:04:17.913998] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.631 [2024-07-15 17:04:17.914003] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x186af40) on tqpair=0x18292c0 00:15:27.631 [2024-07-15 17:04:17.914012] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:27.631 [2024-07-15 17:04:17.914022] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:15:27.631 [2024-07-15 17:04:17.914035] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:15:27.631 [2024-07-15 17:04:17.914044] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:15:27.631 [2024-07-15 17:04:17.914051] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:27.631 [2024-07-15 17:04:17.914057] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:15:27.631 [2024-07-15 17:04:17.914063] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:15:27.631 [2024-07-15 17:04:17.914068] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:15:27.631 [2024-07-15 17:04:17.914074] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:15:27.631 [2024-07-15 17:04:17.914095] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.631 [2024-07-15 17:04:17.914100] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x18292c0) 00:15:27.631 [2024-07-15 17:04:17.914108] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.631 [2024-07-15 17:04:17.914116] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.631 [2024-07-15 17:04:17.914121] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.631 [2024-07-15 17:04:17.914124] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x18292c0) 00:15:27.631 [2024-07-15 17:04:17.914131] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:15:27.631 [2024-07-15 17:04:17.914160] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x186af40, cid 4, qid 0 00:15:27.631 [2024-07-15 17:04:17.914168] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x186b0c0, cid 5, qid 0 00:15:27.631 [2024-07-15 17:04:17.914239] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.631 [2024-07-15 17:04:17.914246] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.631 [2024-07-15 17:04:17.914250] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.631 [2024-07-15 17:04:17.914255] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x186af40) on tqpair=0x18292c0 00:15:27.631 [2024-07-15 17:04:17.914262] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.631 [2024-07-15 17:04:17.914268] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.631 [2024-07-15 17:04:17.914272] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.631 [2024-07-15 17:04:17.914276] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x186b0c0) on tqpair=0x18292c0 00:15:27.631 [2024-07-15 17:04:17.914287] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.631 [2024-07-15 17:04:17.914293] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x18292c0) 00:15:27.631 [2024-07-15 17:04:17.914300] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.631 [2024-07-15 17:04:17.914319] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x186b0c0, cid 5, qid 0 00:15:27.631 [2024-07-15 
17:04:17.914390] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.631 [2024-07-15 17:04:17.914399] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.631 [2024-07-15 17:04:17.914410] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.631 [2024-07-15 17:04:17.914414] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x186b0c0) on tqpair=0x18292c0 00:15:27.631 [2024-07-15 17:04:17.914426] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.631 [2024-07-15 17:04:17.914431] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x18292c0) 00:15:27.631 [2024-07-15 17:04:17.914438] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.631 [2024-07-15 17:04:17.914459] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x186b0c0, cid 5, qid 0 00:15:27.631 [2024-07-15 17:04:17.914512] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.631 [2024-07-15 17:04:17.914519] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.631 [2024-07-15 17:04:17.914523] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.631 [2024-07-15 17:04:17.914527] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x186b0c0) on tqpair=0x18292c0 00:15:27.631 [2024-07-15 17:04:17.914538] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.631 [2024-07-15 17:04:17.914543] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x18292c0) 00:15:27.632 [2024-07-15 17:04:17.914551] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.632 [2024-07-15 17:04:17.914569] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x186b0c0, cid 5, qid 0 00:15:27.632 [2024-07-15 17:04:17.914626] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.632 [2024-07-15 17:04:17.914634] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.632 [2024-07-15 17:04:17.914638] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.632 [2024-07-15 17:04:17.914642] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x186b0c0) on tqpair=0x18292c0 00:15:27.632 [2024-07-15 17:04:17.914663] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.632 [2024-07-15 17:04:17.914669] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x18292c0) 00:15:27.632 [2024-07-15 17:04:17.914677] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.632 [2024-07-15 17:04:17.914686] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.632 [2024-07-15 17:04:17.914691] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x18292c0) 00:15:27.632 [2024-07-15 17:04:17.914698] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.632 [2024-07-15 17:04:17.914707] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.632 [2024-07-15 
17:04:17.914711] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x18292c0) 00:15:27.632 [2024-07-15 17:04:17.914718] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.632 [2024-07-15 17:04:17.914731] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.632 [2024-07-15 17:04:17.914737] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x18292c0) 00:15:27.632 [2024-07-15 17:04:17.914744] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.632 [2024-07-15 17:04:17.914766] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x186b0c0, cid 5, qid 0 00:15:27.632 [2024-07-15 17:04:17.914773] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x186af40, cid 4, qid 0 00:15:27.632 [2024-07-15 17:04:17.914779] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x186b240, cid 6, qid 0 00:15:27.632 [2024-07-15 17:04:17.914784] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x186b3c0, cid 7, qid 0 00:15:27.632 [2024-07-15 17:04:17.914939] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:27.632 [2024-07-15 17:04:17.914947] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:27.632 [2024-07-15 17:04:17.914951] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:27.632 [2024-07-15 17:04:17.914955] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x18292c0): datao=0, datal=8192, cccid=5 00:15:27.632 [2024-07-15 17:04:17.914960] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x186b0c0) on tqpair(0x18292c0): expected_datao=0, payload_size=8192 00:15:27.632 [2024-07-15 17:04:17.914965] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.632 [2024-07-15 17:04:17.914982] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:27.632 [2024-07-15 17:04:17.914988] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:27.632 [2024-07-15 17:04:17.914994] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:27.632 [2024-07-15 17:04:17.915000] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:27.632 [2024-07-15 17:04:17.915004] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:27.632 [2024-07-15 17:04:17.915008] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x18292c0): datao=0, datal=512, cccid=4 00:15:27.632 [2024-07-15 17:04:17.915013] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x186af40) on tqpair(0x18292c0): expected_datao=0, payload_size=512 00:15:27.632 [2024-07-15 17:04:17.915018] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.632 [2024-07-15 17:04:17.915025] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:27.632 [2024-07-15 17:04:17.915029] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:27.632 [2024-07-15 17:04:17.915035] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:27.632 [2024-07-15 17:04:17.915041] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:27.632 [2024-07-15 17:04:17.915045] 
nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:27.632 [2024-07-15 17:04:17.915048] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x18292c0): datao=0, datal=512, cccid=6 00:15:27.632 [2024-07-15 17:04:17.915053] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x186b240) on tqpair(0x18292c0): expected_datao=0, payload_size=512 00:15:27.632 [2024-07-15 17:04:17.915057] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.632 [2024-07-15 17:04:17.915064] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:27.632 [2024-07-15 17:04:17.915068] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:27.632 [2024-07-15 17:04:17.915073] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:27.632 [2024-07-15 17:04:17.915079] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:27.632 [2024-07-15 17:04:17.915083] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:27.632 [2024-07-15 17:04:17.915087] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x18292c0): datao=0, datal=4096, cccid=7 00:15:27.632 [2024-07-15 17:04:17.915091] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x186b3c0) on tqpair(0x18292c0): expected_datao=0, payload_size=4096 00:15:27.632 [2024-07-15 17:04:17.915097] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.632 [2024-07-15 17:04:17.915104] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:27.632 [2024-07-15 17:04:17.915108] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:27.632 [2024-07-15 17:04:17.915116] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.632 [2024-07-15 17:04:17.915122] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.632 [2024-07-15 17:04:17.915126] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.632 [2024-07-15 17:04:17.915130] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x186b0c0) on tqpair=0x18292c0 00:15:27.632 [2024-07-15 17:04:17.915150] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.632 [2024-07-15 17:04:17.915158] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.632 [2024-07-15 17:04:17.915162] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.632 [2024-07-15 17:04:17.915166] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x186af40) on tqpair=0x18292c0 00:15:27.632 ===================================================== 00:15:27.632 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:27.632 ===================================================== 00:15:27.632 Controller Capabilities/Features 00:15:27.632 ================================ 00:15:27.632 Vendor ID: 8086 00:15:27.632 Subsystem Vendor ID: 8086 00:15:27.632 Serial Number: SPDK00000000000001 00:15:27.632 Model Number: SPDK bdev Controller 00:15:27.632 Firmware Version: 24.09 00:15:27.632 Recommended Arb Burst: 6 00:15:27.632 IEEE OUI Identifier: e4 d2 5c 00:15:27.632 Multi-path I/O 00:15:27.632 May have multiple subsystem ports: Yes 00:15:27.632 May have multiple controllers: Yes 00:15:27.632 Associated with SR-IOV VF: No 00:15:27.632 Max Data Transfer Size: 131072 00:15:27.632 Max Number of Namespaces: 32 00:15:27.632 Max Number of I/O Queues: 127 00:15:27.632 NVMe Specification Version (VS): 1.3 00:15:27.632 
NVMe Specification Version (Identify): 1.3 00:15:27.632 Maximum Queue Entries: 128 00:15:27.632 Contiguous Queues Required: Yes 00:15:27.632 Arbitration Mechanisms Supported 00:15:27.632 Weighted Round Robin: Not Supported 00:15:27.632 Vendor Specific: Not Supported 00:15:27.632 Reset Timeout: 15000 ms 00:15:27.632 Doorbell Stride: 4 bytes 00:15:27.632 NVM Subsystem Reset: Not Supported 00:15:27.632 Command Sets Supported 00:15:27.632 NVM Command Set: Supported 00:15:27.632 Boot Partition: Not Supported 00:15:27.632 Memory Page Size Minimum: 4096 bytes 00:15:27.632 Memory Page Size Maximum: 4096 bytes 00:15:27.632 Persistent Memory Region: Not Supported 00:15:27.632 Optional Asynchronous Events Supported 00:15:27.632 Namespace Attribute Notices: Supported 00:15:27.632 Firmware Activation Notices: Not Supported 00:15:27.632 ANA Change Notices: Not Supported 00:15:27.632 PLE Aggregate Log Change Notices: Not Supported 00:15:27.632 LBA Status Info Alert Notices: Not Supported 00:15:27.632 EGE Aggregate Log Change Notices: Not Supported 00:15:27.632 Normal NVM Subsystem Shutdown event: Not Supported 00:15:27.632 Zone Descriptor Change Notices: Not Supported 00:15:27.632 Discovery Log Change Notices: Not Supported 00:15:27.632 Controller Attributes 00:15:27.632 128-bit Host Identifier: Supported 00:15:27.632 Non-Operational Permissive Mode: Not Supported 00:15:27.632 NVM Sets: Not Supported 00:15:27.632 Read Recovery Levels: Not Supported 00:15:27.632 Endurance Groups: Not Supported 00:15:27.632 Predictable Latency Mode: Not Supported 00:15:27.632 Traffic Based Keep ALive: Not Supported 00:15:27.632 Namespace Granularity: Not Supported 00:15:27.632 SQ Associations: Not Supported 00:15:27.632 UUID List: Not Supported 00:15:27.632 Multi-Domain Subsystem: Not Supported 00:15:27.632 Fixed Capacity Management: Not Supported 00:15:27.632 Variable Capacity Management: Not Supported 00:15:27.632 Delete Endurance Group: Not Supported 00:15:27.632 Delete NVM Set: Not Supported 00:15:27.632 Extended LBA Formats Supported: Not Supported 00:15:27.632 Flexible Data Placement Supported: Not Supported 00:15:27.632 00:15:27.632 Controller Memory Buffer Support 00:15:27.632 ================================ 00:15:27.632 Supported: No 00:15:27.632 00:15:27.632 Persistent Memory Region Support 00:15:27.632 ================================ 00:15:27.632 Supported: No 00:15:27.632 00:15:27.632 Admin Command Set Attributes 00:15:27.632 ============================ 00:15:27.632 Security Send/Receive: Not Supported 00:15:27.632 Format NVM: Not Supported 00:15:27.632 Firmware Activate/Download: Not Supported 00:15:27.632 Namespace Management: Not Supported 00:15:27.632 Device Self-Test: Not Supported 00:15:27.632 Directives: Not Supported 00:15:27.632 NVMe-MI: Not Supported 00:15:27.632 Virtualization Management: Not Supported 00:15:27.632 Doorbell Buffer Config: Not Supported 00:15:27.632 Get LBA Status Capability: Not Supported 00:15:27.632 Command & Feature Lockdown Capability: Not Supported 00:15:27.632 Abort Command Limit: 4 00:15:27.632 Async Event Request Limit: 4 00:15:27.632 Number of Firmware Slots: N/A 00:15:27.632 Firmware Slot 1 Read-Only: N/A 00:15:27.632 Firmware Activation Without Reset: [2024-07-15 17:04:17.915180] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.633 [2024-07-15 17:04:17.915188] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.633 [2024-07-15 17:04:17.915191] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.633 
[2024-07-15 17:04:17.915196] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x186b240) on tqpair=0x18292c0 00:15:27.633 [2024-07-15 17:04:17.915204] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.633 [2024-07-15 17:04:17.915210] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.633 [2024-07-15 17:04:17.915214] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.633 [2024-07-15 17:04:17.915218] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x186b3c0) on tqpair=0x18292c0 00:15:27.633 N/A 00:15:27.633 Multiple Update Detection Support: N/A 00:15:27.633 Firmware Update Granularity: No Information Provided 00:15:27.633 Per-Namespace SMART Log: No 00:15:27.633 Asymmetric Namespace Access Log Page: Not Supported 00:15:27.633 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:15:27.633 Command Effects Log Page: Supported 00:15:27.633 Get Log Page Extended Data: Supported 00:15:27.633 Telemetry Log Pages: Not Supported 00:15:27.633 Persistent Event Log Pages: Not Supported 00:15:27.633 Supported Log Pages Log Page: May Support 00:15:27.633 Commands Supported & Effects Log Page: Not Supported 00:15:27.633 Feature Identifiers & Effects Log Page:May Support 00:15:27.633 NVMe-MI Commands & Effects Log Page: May Support 00:15:27.633 Data Area 4 for Telemetry Log: Not Supported 00:15:27.633 Error Log Page Entries Supported: 128 00:15:27.633 Keep Alive: Supported 00:15:27.633 Keep Alive Granularity: 10000 ms 00:15:27.633 00:15:27.633 NVM Command Set Attributes 00:15:27.633 ========================== 00:15:27.633 Submission Queue Entry Size 00:15:27.633 Max: 64 00:15:27.633 Min: 64 00:15:27.633 Completion Queue Entry Size 00:15:27.633 Max: 16 00:15:27.633 Min: 16 00:15:27.633 Number of Namespaces: 32 00:15:27.633 Compare Command: Supported 00:15:27.633 Write Uncorrectable Command: Not Supported 00:15:27.633 Dataset Management Command: Supported 00:15:27.633 Write Zeroes Command: Supported 00:15:27.633 Set Features Save Field: Not Supported 00:15:27.633 Reservations: Supported 00:15:27.633 Timestamp: Not Supported 00:15:27.633 Copy: Supported 00:15:27.633 Volatile Write Cache: Present 00:15:27.633 Atomic Write Unit (Normal): 1 00:15:27.633 Atomic Write Unit (PFail): 1 00:15:27.633 Atomic Compare & Write Unit: 1 00:15:27.633 Fused Compare & Write: Supported 00:15:27.633 Scatter-Gather List 00:15:27.633 SGL Command Set: Supported 00:15:27.633 SGL Keyed: Supported 00:15:27.633 SGL Bit Bucket Descriptor: Not Supported 00:15:27.633 SGL Metadata Pointer: Not Supported 00:15:27.633 Oversized SGL: Not Supported 00:15:27.633 SGL Metadata Address: Not Supported 00:15:27.633 SGL Offset: Supported 00:15:27.633 Transport SGL Data Block: Not Supported 00:15:27.633 Replay Protected Memory Block: Not Supported 00:15:27.633 00:15:27.633 Firmware Slot Information 00:15:27.633 ========================= 00:15:27.633 Active slot: 1 00:15:27.633 Slot 1 Firmware Revision: 24.09 00:15:27.633 00:15:27.633 00:15:27.633 Commands Supported and Effects 00:15:27.633 ============================== 00:15:27.633 Admin Commands 00:15:27.633 -------------- 00:15:27.633 Get Log Page (02h): Supported 00:15:27.633 Identify (06h): Supported 00:15:27.633 Abort (08h): Supported 00:15:27.633 Set Features (09h): Supported 00:15:27.633 Get Features (0Ah): Supported 00:15:27.633 Asynchronous Event Request (0Ch): Supported 00:15:27.633 Keep Alive (18h): Supported 00:15:27.633 I/O Commands 00:15:27.633 ------------ 00:15:27.633 Flush (00h): Supported 
LBA-Change 00:15:27.633 Write (01h): Supported LBA-Change 00:15:27.633 Read (02h): Supported 00:15:27.633 Compare (05h): Supported 00:15:27.633 Write Zeroes (08h): Supported LBA-Change 00:15:27.633 Dataset Management (09h): Supported LBA-Change 00:15:27.633 Copy (19h): Supported LBA-Change 00:15:27.633 00:15:27.633 Error Log 00:15:27.633 ========= 00:15:27.633 00:15:27.633 Arbitration 00:15:27.633 =========== 00:15:27.633 Arbitration Burst: 1 00:15:27.633 00:15:27.633 Power Management 00:15:27.633 ================ 00:15:27.633 Number of Power States: 1 00:15:27.633 Current Power State: Power State #0 00:15:27.633 Power State #0: 00:15:27.633 Max Power: 0.00 W 00:15:27.633 Non-Operational State: Operational 00:15:27.633 Entry Latency: Not Reported 00:15:27.633 Exit Latency: Not Reported 00:15:27.633 Relative Read Throughput: 0 00:15:27.633 Relative Read Latency: 0 00:15:27.633 Relative Write Throughput: 0 00:15:27.633 Relative Write Latency: 0 00:15:27.633 Idle Power: Not Reported 00:15:27.633 Active Power: Not Reported 00:15:27.633 Non-Operational Permissive Mode: Not Supported 00:15:27.633 00:15:27.633 Health Information 00:15:27.633 ================== 00:15:27.633 Critical Warnings: 00:15:27.633 Available Spare Space: OK 00:15:27.633 Temperature: OK 00:15:27.633 Device Reliability: OK 00:15:27.633 Read Only: No 00:15:27.633 Volatile Memory Backup: OK 00:15:27.633 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:27.633 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:27.633 Available Spare: 0% 00:15:27.633 Available Spare Threshold: 0% 00:15:27.633 Life Percentage Used:[2024-07-15 17:04:17.915337] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.633 [2024-07-15 17:04:17.915345] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x18292c0) 00:15:27.893 [2024-07-15 17:04:17.919366] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.893 [2024-07-15 17:04:17.919424] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x186b3c0, cid 7, qid 0 00:15:27.893 [2024-07-15 17:04:17.919501] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.893 [2024-07-15 17:04:17.919511] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.893 [2024-07-15 17:04:17.919516] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.893 [2024-07-15 17:04:17.919521] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x186b3c0) on tqpair=0x18292c0 00:15:27.893 [2024-07-15 17:04:17.919603] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:15:27.893 [2024-07-15 17:04:17.919623] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x186a940) on tqpair=0x18292c0 00:15:27.893 [2024-07-15 17:04:17.919633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.893 [2024-07-15 17:04:17.919640] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x186aac0) on tqpair=0x18292c0 00:15:27.893 [2024-07-15 17:04:17.919645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.893 [2024-07-15 17:04:17.919651] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x186ac40) on tqpair=0x18292c0 00:15:27.893 [2024-07-15 
17:04:17.919656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.893 [2024-07-15 17:04:17.919661] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x186adc0) on tqpair=0x18292c0 00:15:27.893 [2024-07-15 17:04:17.919666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.893 [2024-07-15 17:04:17.919677] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.893 [2024-07-15 17:04:17.919694] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.893 [2024-07-15 17:04:17.919705] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18292c0) 00:15:27.893 [2024-07-15 17:04:17.919721] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.893 [2024-07-15 17:04:17.919792] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x186adc0, cid 3, qid 0 00:15:27.893 [2024-07-15 17:04:17.919936] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.893 [2024-07-15 17:04:17.919963] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.893 [2024-07-15 17:04:17.919973] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.893 [2024-07-15 17:04:17.919984] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x186adc0) on tqpair=0x18292c0 00:15:27.893 [2024-07-15 17:04:17.920003] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.893 [2024-07-15 17:04:17.920018] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.893 [2024-07-15 17:04:17.920033] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18292c0) 00:15:27.893 [2024-07-15 17:04:17.920044] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.893 [2024-07-15 17:04:17.920100] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x186adc0, cid 3, qid 0 00:15:27.893 [2024-07-15 17:04:17.920268] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.893 [2024-07-15 17:04:17.920277] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.893 [2024-07-15 17:04:17.920280] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.893 [2024-07-15 17:04:17.920285] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x186adc0) on tqpair=0x18292c0 00:15:27.893 [2024-07-15 17:04:17.920291] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:15:27.893 [2024-07-15 17:04:17.920297] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:15:27.893 [2024-07-15 17:04:17.920316] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.893 [2024-07-15 17:04:17.920322] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.893 [2024-07-15 17:04:17.920329] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18292c0) 00:15:27.893 [2024-07-15 17:04:17.920338] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.893 [2024-07-15 17:04:17.920415] 
nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x186adc0, cid 3, qid 0 00:15:27.893 [2024-07-15 17:04:17.920532] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.893 [2024-07-15 17:04:17.920549] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.893 [2024-07-15 17:04:17.920560] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.893 [2024-07-15 17:04:17.920571] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x186adc0) on tqpair=0x18292c0 00:15:27.893 [2024-07-15 17:04:17.920599] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.893 [2024-07-15 17:04:17.920617] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.893 [2024-07-15 17:04:17.920627] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18292c0) 00:15:27.893 [2024-07-15 17:04:17.920651] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.893 [2024-07-15 17:04:17.920684] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x186adc0, cid 3, qid 0 00:15:27.893 [2024-07-15 17:04:17.920750] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.893 [2024-07-15 17:04:17.920763] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.893 [2024-07-15 17:04:17.920768] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.893 [2024-07-15 17:04:17.920772] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x186adc0) on tqpair=0x18292c0 00:15:27.893 [2024-07-15 17:04:17.920784] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.893 [2024-07-15 17:04:17.920790] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.893 [2024-07-15 17:04:17.920794] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18292c0) 00:15:27.893 [2024-07-15 17:04:17.920802] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.893 [2024-07-15 17:04:17.920821] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x186adc0, cid 3, qid 0 00:15:27.893 [2024-07-15 17:04:17.920884] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.893 [2024-07-15 17:04:17.920891] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.893 [2024-07-15 17:04:17.920895] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.893 [2024-07-15 17:04:17.920899] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x186adc0) on tqpair=0x18292c0 00:15:27.893 [2024-07-15 17:04:17.920910] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.893 [2024-07-15 17:04:17.920916] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.893 [2024-07-15 17:04:17.920920] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18292c0) 00:15:27.893 [2024-07-15 17:04:17.920927] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.893 [2024-07-15 17:04:17.920946] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x186adc0, cid 3, qid 0 00:15:27.893 [2024-07-15 17:04:17.921007] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.893 
[2024-07-15 17:04:17.921016] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.893 [2024-07-15 17:04:17.921021] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.893 [2024-07-15 17:04:17.921025] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x186adc0) on tqpair=0x18292c0 00:15:27.893 [2024-07-15 17:04:17.921036] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.893 [2024-07-15 17:04:17.921042] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.893 [2024-07-15 17:04:17.921046] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18292c0) 00:15:27.893 [2024-07-15 17:04:17.921054] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.893 [2024-07-15 17:04:17.921072] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x186adc0, cid 3, qid 0 00:15:27.893 [2024-07-15 17:04:17.921122] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.893 [2024-07-15 17:04:17.921135] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.893 [2024-07-15 17:04:17.921139] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.894 [2024-07-15 17:04:17.921144] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x186adc0) on tqpair=0x18292c0 00:15:27.894 [2024-07-15 17:04:17.921155] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.894 [2024-07-15 17:04:17.921161] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.894 [2024-07-15 17:04:17.921165] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18292c0) 00:15:27.894 [2024-07-15 17:04:17.921173] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.894 [2024-07-15 17:04:17.921192] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x186adc0, cid 3, qid 0 00:15:27.894 [2024-07-15 17:04:17.921242] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.894 [2024-07-15 17:04:17.921249] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.894 [2024-07-15 17:04:17.921253] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.894 [2024-07-15 17:04:17.921258] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x186adc0) on tqpair=0x18292c0 00:15:27.894 [2024-07-15 17:04:17.921269] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.894 [2024-07-15 17:04:17.921274] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.894 [2024-07-15 17:04:17.921278] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18292c0) 00:15:27.894 [2024-07-15 17:04:17.921286] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.894 [2024-07-15 17:04:17.921303] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x186adc0, cid 3, qid 0 00:15:27.894 [2024-07-15 17:04:17.921372] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.894 [2024-07-15 17:04:17.921381] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.894 [2024-07-15 17:04:17.921385] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:15:27.894 [2024-07-15 17:04:17.921389] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x186adc0) on tqpair=0x18292c0 00:15:27.894 [2024-07-15 17:04:17.921401] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.894 [2024-07-15 17:04:17.921406] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.894 [2024-07-15 17:04:17.921410] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18292c0) 00:15:27.894 [2024-07-15 17:04:17.921418] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.894 [2024-07-15 17:04:17.921439] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x186adc0, cid 3, qid 0 00:15:27.894 [2024-07-15 17:04:17.921506] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.894 [2024-07-15 17:04:17.921513] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.894 [2024-07-15 17:04:17.921517] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.894 [2024-07-15 17:04:17.921521] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x186adc0) on tqpair=0x18292c0 00:15:27.894 [2024-07-15 17:04:17.921532] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.894 [2024-07-15 17:04:17.921538] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.894 [2024-07-15 17:04:17.921542] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18292c0) 00:15:27.894 [2024-07-15 17:04:17.921549] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.894 [2024-07-15 17:04:17.921568] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x186adc0, cid 3, qid 0 00:15:27.894 [2024-07-15 17:04:17.921620] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.894 [2024-07-15 17:04:17.921638] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.894 [2024-07-15 17:04:17.921643] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.894 [2024-07-15 17:04:17.921647] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x186adc0) on tqpair=0x18292c0 00:15:27.894 [2024-07-15 17:04:17.925375] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:27.894 [2024-07-15 17:04:17.925396] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:27.894 [2024-07-15 17:04:17.925402] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18292c0) 00:15:27.894 [2024-07-15 17:04:17.925412] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:27.894 [2024-07-15 17:04:17.925442] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x186adc0, cid 3, qid 0 00:15:27.894 [2024-07-15 17:04:17.925495] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:27.894 [2024-07-15 17:04:17.925504] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:27.894 [2024-07-15 17:04:17.925508] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:27.894 [2024-07-15 17:04:17.925512] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x186adc0) on tqpair=0x18292c0 00:15:27.894 [2024-07-15 17:04:17.925522] 
nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 5 milliseconds 00:15:27.894 0% 00:15:27.894 Data Units Read: 0 00:15:27.894 Data Units Written: 0 00:15:27.894 Host Read Commands: 0 00:15:27.894 Host Write Commands: 0 00:15:27.894 Controller Busy Time: 0 minutes 00:15:27.894 Power Cycles: 0 00:15:27.894 Power On Hours: 0 hours 00:15:27.894 Unsafe Shutdowns: 0 00:15:27.894 Unrecoverable Media Errors: 0 00:15:27.894 Lifetime Error Log Entries: 0 00:15:27.894 Warning Temperature Time: 0 minutes 00:15:27.894 Critical Temperature Time: 0 minutes 00:15:27.894 00:15:27.894 Number of Queues 00:15:27.894 ================ 00:15:27.894 Number of I/O Submission Queues: 127 00:15:27.894 Number of I/O Completion Queues: 127 00:15:27.894 00:15:27.894 Active Namespaces 00:15:27.894 ================= 00:15:27.894 Namespace ID:1 00:15:27.894 Error Recovery Timeout: Unlimited 00:15:27.894 Command Set Identifier: NVM (00h) 00:15:27.894 Deallocate: Supported 00:15:27.894 Deallocated/Unwritten Error: Not Supported 00:15:27.894 Deallocated Read Value: Unknown 00:15:27.894 Deallocate in Write Zeroes: Not Supported 00:15:27.894 Deallocated Guard Field: 0xFFFF 00:15:27.894 Flush: Supported 00:15:27.894 Reservation: Supported 00:15:27.894 Namespace Sharing Capabilities: Multiple Controllers 00:15:27.894 Size (in LBAs): 131072 (0GiB) 00:15:27.894 Capacity (in LBAs): 131072 (0GiB) 00:15:27.894 Utilization (in LBAs): 131072 (0GiB) 00:15:27.894 NGUID: ABCDEF0123456789ABCDEF0123456789 00:15:27.894 EUI64: ABCDEF0123456789 00:15:27.894 UUID: fabe60b2-9e3e-4845-a673-22357cb73efa 00:15:27.894 Thin Provisioning: Not Supported 00:15:27.894 Per-NS Atomic Units: Yes 00:15:27.894 Atomic Boundary Size (Normal): 0 00:15:27.894 Atomic Boundary Size (PFail): 0 00:15:27.894 Atomic Boundary Offset: 0 00:15:27.894 Maximum Single Source Range Length: 65535 00:15:27.894 Maximum Copy Length: 65535 00:15:27.894 Maximum Source Range Count: 1 00:15:27.894 NGUID/EUI64 Never Reused: No 00:15:27.894 Namespace Write Protected: No 00:15:27.894 Number of LBA Formats: 1 00:15:27.894 Current LBA Format: LBA Format #00 00:15:27.894 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:27.894 00:15:27.894 17:04:17 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:15:27.894 17:04:17 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:27.894 17:04:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.894 17:04:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:27.894 17:04:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.894 17:04:17 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:15:27.894 17:04:17 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:15:27.894 17:04:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:27.894 17:04:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:15:27.894 17:04:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:27.894 17:04:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:15:27.894 17:04:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:27.894 17:04:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:27.894 rmmod nvme_tcp 00:15:27.894 rmmod nvme_fabrics 00:15:27.894 rmmod nvme_keyring 00:15:27.894 17:04:18 
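The controller data, health information, and namespace listing above are what SPDK's identify example reports for nqn.2016-06.io.spdk:cnode1 over TCP, and the trace alongside it shows host/identify.sh removing the subsystem over JSON-RPC before nvmftestfini. As a rough sketch of those two steps (the identify binary path is an assumption and may differ between SPDK builds; the transport string and subsystem name are the ones used throughout this log):

    # query a remote NVMe-oF TCP subsystem with the SPDK identify example
    ./build/examples/identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
    # tear the subsystem down over JSON-RPC, as host/identify.sh does above
    ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
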
nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:27.894 17:04:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:15:27.894 17:04:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:15:27.894 17:04:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 74698 ']' 00:15:27.894 17:04:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 74698 00:15:27.894 17:04:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 74698 ']' 00:15:27.894 17:04:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 74698 00:15:27.894 17:04:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:15:27.894 17:04:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:27.894 17:04:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74698 00:15:27.894 17:04:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:27.894 17:04:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:27.894 17:04:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74698' 00:15:27.894 killing process with pid 74698 00:15:27.894 17:04:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 74698 00:15:27.894 17:04:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 74698 00:15:28.153 17:04:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:28.153 17:04:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:28.153 17:04:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:28.153 17:04:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:28.153 17:04:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:28.153 17:04:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:28.153 17:04:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:28.153 17:04:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:28.413 17:04:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:28.413 00:15:28.413 real 0m2.700s 00:15:28.413 user 0m7.183s 00:15:28.413 sys 0m0.741s 00:15:28.413 17:04:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:28.413 17:04:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:28.413 ************************************ 00:15:28.413 END TEST nvmf_identify 00:15:28.413 ************************************ 00:15:28.413 17:04:18 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:28.413 17:04:18 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:15:28.413 17:04:18 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:28.413 17:04:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:28.413 17:04:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:28.413 ************************************ 00:15:28.413 START TEST nvmf_perf 00:15:28.413 ************************************ 00:15:28.413 17:04:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh 
--transport=tcp 00:15:28.413 * Looking for test storage... 00:15:28.413 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:28.413 17:04:18 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:28.413 17:04:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:15:28.413 17:04:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:28.413 17:04:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:28.413 17:04:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:28.413 17:04:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:28.413 17:04:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:28.413 17:04:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:28.413 17:04:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:28.413 17:04:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:28.413 17:04:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:28.413 17:04:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:28.413 17:04:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da 00:15:28.413 17:04:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=0b4e8503-7bac-4879-926a-209303c4b3da 00:15:28.413 17:04:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:28.413 17:04:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:28.414 17:04:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:28.414 17:04:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:28.414 17:04:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:28.414 17:04:18 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:28.414 17:04:18 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:28.414 17:04:18 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:28.414 17:04:18 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.414 17:04:18 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.414 17:04:18 
nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.414 17:04:18 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:15:28.414 17:04:18 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.414 17:04:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:15:28.414 17:04:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:28.414 17:04:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:28.414 17:04:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:28.414 17:04:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:28.414 17:04:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:28.414 17:04:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:28.414 17:04:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:28.414 17:04:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:28.414 17:04:18 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:28.414 17:04:18 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:28.414 17:04:18 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:28.414 17:04:18 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:15:28.414 17:04:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:28.414 17:04:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:28.414 17:04:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:28.414 17:04:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:28.414 17:04:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:28.414 17:04:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:28.414 17:04:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:28.414 17:04:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:28.414 17:04:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:28.414 17:04:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:28.414 17:04:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:28.414 17:04:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:28.414 
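The nvmf_veth_init call that follows builds the test network for this run. A condensed sketch of what its trace below performs, with interface names and addresses taken from that trace (the link-up steps, the second target interface at 10.0.0.3, and error handling are omitted here):

    # network namespace for the target, plus veth pairs bridged to the initiator side
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator side
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target side
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2    # reachability check, as the trace does for all three addresses
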
17:04:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:28.414 17:04:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:28.414 17:04:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:28.414 17:04:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:28.414 17:04:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:28.414 17:04:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:28.414 17:04:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:28.414 17:04:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:28.414 17:04:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:28.414 17:04:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:28.414 17:04:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:28.414 17:04:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:28.414 17:04:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:28.414 17:04:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:28.414 17:04:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:28.414 17:04:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:28.414 Cannot find device "nvmf_tgt_br" 00:15:28.414 17:04:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@155 -- # true 00:15:28.414 17:04:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:28.414 Cannot find device "nvmf_tgt_br2" 00:15:28.414 17:04:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@156 -- # true 00:15:28.414 17:04:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:28.414 17:04:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:28.414 Cannot find device "nvmf_tgt_br" 00:15:28.414 17:04:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@158 -- # true 00:15:28.414 17:04:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:28.414 Cannot find device "nvmf_tgt_br2" 00:15:28.414 17:04:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@159 -- # true 00:15:28.414 17:04:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:28.673 17:04:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:28.673 17:04:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:28.673 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:28.673 17:04:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@162 -- # true 00:15:28.673 17:04:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:28.673 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:28.673 17:04:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@163 -- # true 00:15:28.673 17:04:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:28.673 17:04:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:28.673 17:04:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if 
type veth peer name nvmf_tgt_br 00:15:28.673 17:04:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:28.673 17:04:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:28.673 17:04:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:28.673 17:04:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:28.673 17:04:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:28.673 17:04:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:28.673 17:04:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:28.673 17:04:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:28.673 17:04:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:28.673 17:04:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:28.673 17:04:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:28.673 17:04:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:28.673 17:04:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:28.673 17:04:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:28.673 17:04:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:28.673 17:04:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:28.673 17:04:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:28.673 17:04:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:28.673 17:04:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:28.673 17:04:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:28.673 17:04:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:28.673 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:28.673 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:15:28.673 00:15:28.673 --- 10.0.0.2 ping statistics --- 00:15:28.673 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:28.673 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:15:28.673 17:04:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:28.673 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:28.673 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:15:28.673 00:15:28.673 --- 10.0.0.3 ping statistics --- 00:15:28.673 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:28.673 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:15:28.673 17:04:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:28.673 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:28.673 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:15:28.673 00:15:28.673 --- 10.0.0.1 ping statistics --- 00:15:28.673 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:28.673 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:15:28.673 17:04:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:28.673 17:04:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@433 -- # return 0 00:15:28.673 17:04:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:28.673 17:04:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:28.673 17:04:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:28.673 17:04:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:28.673 17:04:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:28.673 17:04:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:28.673 17:04:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:28.673 17:04:18 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:15:28.673 17:04:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:28.673 17:04:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:28.673 17:04:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:15:28.673 17:04:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=74909 00:15:28.673 17:04:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 74909 00:15:28.673 17:04:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:28.673 17:04:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 74909 ']' 00:15:28.673 17:04:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:28.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:28.673 17:04:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:28.673 17:04:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:28.673 17:04:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:28.673 17:04:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:15:28.932 [2024-07-15 17:04:19.010192] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:15:28.932 [2024-07-15 17:04:19.010273] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:28.932 [2024-07-15 17:04:19.146788] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:29.190 [2024-07-15 17:04:19.276281] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:29.190 [2024-07-15 17:04:19.276595] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:29.190 [2024-07-15 17:04:19.276770] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:29.190 [2024-07-15 17:04:19.276922] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:29.190 [2024-07-15 17:04:19.276992] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:29.190 [2024-07-15 17:04:19.277279] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:29.190 [2024-07-15 17:04:19.277381] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:29.190 [2024-07-15 17:04:19.277474] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:29.190 [2024-07-15 17:04:19.277474] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:29.190 [2024-07-15 17:04:19.336567] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:29.757 17:04:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:29.757 17:04:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:15:29.757 17:04:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:29.757 17:04:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:29.757 17:04:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:15:30.016 17:04:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:30.016 17:04:20 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:15:30.016 17:04:20 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:15:30.275 17:04:20 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:15:30.275 17:04:20 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:15:30.536 17:04:20 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:15:30.536 17:04:20 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:30.795 17:04:21 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:15:30.795 17:04:21 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:15:30.795 17:04:21 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:15:30.795 17:04:21 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:15:30.795 17:04:21 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:31.053 [2024-07-15 17:04:21.290760] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:31.053 17:04:21 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:31.312 17:04:21 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:15:31.312 17:04:21 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:31.570 17:04:21 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:15:31.570 17:04:21 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 
Nvme0n1 00:15:31.828 17:04:22 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:32.088 [2024-07-15 17:04:22.324375] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:32.088 17:04:22 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:32.347 17:04:22 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:15:32.347 17:04:22 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:15:32.347 17:04:22 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:15:32.347 17:04:22 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:15:33.722 Initializing NVMe Controllers 00:15:33.722 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:15:33.722 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:15:33.722 Initialization complete. Launching workers. 00:15:33.722 ======================================================== 00:15:33.722 Latency(us) 00:15:33.722 Device Information : IOPS MiB/s Average min max 00:15:33.722 PCIE (0000:00:10.0) NSID 1 from core 0: 23483.66 91.73 1362.61 359.79 6699.88 00:15:33.722 ======================================================== 00:15:33.722 Total : 23483.66 91.73 1362.61 359.79 6699.88 00:15:33.722 00:15:33.722 17:04:23 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:15:35.120 Initializing NVMe Controllers 00:15:35.120 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:35.120 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:35.120 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:35.120 Initialization complete. Launching workers. 00:15:35.120 ======================================================== 00:15:35.120 Latency(us) 00:15:35.120 Device Information : IOPS MiB/s Average min max 00:15:35.120 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3553.91 13.88 281.09 106.13 4256.10 00:15:35.120 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 124.00 0.48 8128.27 6020.99 12010.08 00:15:35.120 ======================================================== 00:15:35.120 Total : 3677.91 14.37 545.65 106.13 12010.08 00:15:35.120 00:15:35.120 17:04:25 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:15:36.493 Initializing NVMe Controllers 00:15:36.493 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:36.493 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:36.493 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:36.493 Initialization complete. Launching workers. 
00:15:36.493 ======================================================== 00:15:36.493 Latency(us) 00:15:36.493 Device Information : IOPS MiB/s Average min max 00:15:36.493 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8659.64 33.83 3695.31 658.39 7690.14 00:15:36.493 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3975.23 15.53 8049.73 6182.51 15918.57 00:15:36.493 ======================================================== 00:15:36.493 Total : 12634.86 49.35 5065.31 658.39 15918.57 00:15:36.493 00:15:36.493 17:04:26 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:15:36.493 17:04:26 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:15:39.023 Initializing NVMe Controllers 00:15:39.023 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:39.023 Controller IO queue size 128, less than required. 00:15:39.023 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:39.023 Controller IO queue size 128, less than required. 00:15:39.023 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:39.023 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:39.023 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:39.023 Initialization complete. Launching workers. 00:15:39.023 ======================================================== 00:15:39.023 Latency(us) 00:15:39.023 Device Information : IOPS MiB/s Average min max 00:15:39.023 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1748.66 437.16 74790.60 44657.60 124616.67 00:15:39.023 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 638.15 159.54 203055.09 63058.11 312414.64 00:15:39.023 ======================================================== 00:15:39.023 Total : 2386.81 596.70 109083.91 44657.60 312414.64 00:15:39.023 00:15:39.023 17:04:28 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:15:39.023 Initializing NVMe Controllers 00:15:39.023 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:39.023 Controller IO queue size 128, less than required. 00:15:39.023 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:39.023 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:15:39.023 Controller IO queue size 128, less than required. 00:15:39.023 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:39.023 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. 
Removing this ns from test 00:15:39.023 WARNING: Some requested NVMe devices were skipped 00:15:39.023 No valid NVMe controllers or AIO or URING devices found 00:15:39.023 17:04:29 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:15:41.570 Initializing NVMe Controllers 00:15:41.570 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:41.570 Controller IO queue size 128, less than required. 00:15:41.570 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:41.570 Controller IO queue size 128, less than required. 00:15:41.570 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:41.570 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:41.570 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:41.570 Initialization complete. Launching workers. 00:15:41.570 00:15:41.570 ==================== 00:15:41.570 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:15:41.570 TCP transport: 00:15:41.570 polls: 12002 00:15:41.570 idle_polls: 7866 00:15:41.570 sock_completions: 4136 00:15:41.570 nvme_completions: 6519 00:15:41.570 submitted_requests: 9644 00:15:41.570 queued_requests: 1 00:15:41.570 00:15:41.570 ==================== 00:15:41.570 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:15:41.570 TCP transport: 00:15:41.570 polls: 10191 00:15:41.570 idle_polls: 6351 00:15:41.570 sock_completions: 3840 00:15:41.570 nvme_completions: 6911 00:15:41.570 submitted_requests: 10458 00:15:41.570 queued_requests: 1 00:15:41.570 ======================================================== 00:15:41.570 Latency(us) 00:15:41.570 Device Information : IOPS MiB/s Average min max 00:15:41.570 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1629.45 407.36 80435.28 44666.77 121550.20 00:15:41.570 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1727.44 431.86 74090.93 37764.69 124690.54 00:15:41.570 ======================================================== 00:15:41.570 Total : 3356.89 839.22 77170.50 37764.69 124690.54 00:15:41.570 00:15:41.570 17:04:31 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:15:41.570 17:04:31 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:41.841 17:04:31 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:15:41.841 17:04:31 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:15:41.841 17:04:31 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:15:41.841 17:04:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:41.841 17:04:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:15:41.841 17:04:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:41.841 17:04:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:15:41.841 17:04:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:41.841 17:04:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:41.841 rmmod nvme_tcp 00:15:41.841 rmmod nvme_fabrics 00:15:41.841 rmmod nvme_keyring 00:15:41.841 17:04:31 
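The perf test above brings the target up over JSON-RPC inside the target namespace and then drives it with spdk_nvme_perf at several queue depths and I/O sizes before tearing everything down. A condensed sketch of that sequence, with commands as they appear in the xtrace above (paths shortened to be relative to the SPDK checkout; the Nvme0n1 namespace and the discovery listener steps are omitted):

    # target bring-up over JSON-RPC
    ./scripts/rpc.py nvmf_create_transport -t tcp -o
    ./scripts/rpc.py bdev_malloc_create 64 512
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # one of the measurement runs shown above (qd 128, 256 KiB I/O, 50/50 randrw, transport stats)
    ./build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat
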
nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:41.841 17:04:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:15:41.841 17:04:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:15:41.841 17:04:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 74909 ']' 00:15:41.841 17:04:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 74909 00:15:41.841 17:04:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 74909 ']' 00:15:41.841 17:04:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 74909 00:15:41.841 17:04:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:15:41.841 17:04:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:41.841 17:04:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74909 00:15:41.841 17:04:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:41.841 killing process with pid 74909 00:15:41.841 17:04:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:41.841 17:04:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74909' 00:15:41.841 17:04:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 74909 00:15:41.841 17:04:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 74909 00:15:42.788 17:04:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:42.788 17:04:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:42.788 17:04:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:42.788 17:04:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:42.788 17:04:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:42.788 17:04:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:42.788 17:04:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:42.788 17:04:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:42.788 17:04:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:42.788 ************************************ 00:15:42.788 END TEST nvmf_perf 00:15:42.788 ************************************ 00:15:42.788 00:15:42.788 real 0m14.274s 00:15:42.788 user 0m52.850s 00:15:42.788 sys 0m3.998s 00:15:42.788 17:04:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:42.788 17:04:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:15:42.788 17:04:32 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:42.788 17:04:32 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:15:42.788 17:04:32 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:42.788 17:04:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:42.788 17:04:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:42.788 ************************************ 00:15:42.788 START TEST nvmf_fio_host 00:15:42.788 ************************************ 00:15:42.788 17:04:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:15:42.788 * Looking for test storage... 
00:15:42.788 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:42.788 17:04:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:42.788 17:04:32 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:42.788 17:04:32 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:42.788 17:04:32 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:42.788 17:04:32 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.788 17:04:32 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.788 17:04:32 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.788 17:04:32 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:15:42.788 17:04:32 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.788 17:04:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:42.788 17:04:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:15:42.788 17:04:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:42.788 17:04:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:42.788 17:04:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:42.788 17:04:32 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:42.788 17:04:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:42.788 17:04:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:42.788 17:04:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:42.788 17:04:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:42.788 17:04:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:42.788 17:04:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:42.789 17:04:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da 00:15:42.789 17:04:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=0b4e8503-7bac-4879-926a-209303c4b3da 00:15:42.789 17:04:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:42.789 17:04:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:42.789 17:04:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:42.789 17:04:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:42.789 17:04:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:42.789 17:04:32 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:42.789 17:04:32 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:42.789 17:04:32 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:42.789 17:04:32 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.789 17:04:32 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.789 17:04:32 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.789 17:04:32 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:15:42.789 17:04:32 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.789 17:04:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:15:42.789 17:04:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:42.789 17:04:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:42.789 17:04:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:42.789 17:04:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:42.789 17:04:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:42.789 17:04:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:42.789 17:04:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:42.789 17:04:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:42.789 17:04:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:42.789 17:04:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:15:42.789 17:04:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:42.789 17:04:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:42.789 17:04:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:42.789 17:04:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:42.789 17:04:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:42.789 17:04:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:42.789 17:04:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:42.789 17:04:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:42.789 17:04:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:42.789 17:04:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:42.789 17:04:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:42.789 17:04:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 
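The nvmf_veth_init sequence traced below builds the virtual test topology by hand: one network namespace for the target, veth pairs for the initiator and target ports, and a bridge tying the host-side peer interfaces together. As an illustrative aside (not part of the log), the same steps reduce to roughly the following hand-runnable sketch; interface and namespace names are taken from the trace, while the second target interface (10.0.0.3) and all error handling are omitted for brevity.

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator side
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target side
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2    # initiator -> target reachability check, as in the trace

The pings at the end of the real sequence (to 10.0.0.2, 10.0.0.3, and back to 10.0.0.1 from inside the namespace) are what gate the return 0 from nvmftestinit before the target application is started.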
00:15:42.789 17:04:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:42.789 17:04:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:42.789 17:04:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:42.789 17:04:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:42.789 17:04:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:42.789 17:04:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:42.789 17:04:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:42.789 17:04:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:42.789 17:04:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:42.789 17:04:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:42.789 17:04:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:42.789 17:04:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:42.789 17:04:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:42.789 17:04:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:42.789 17:04:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:42.789 17:04:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:42.789 Cannot find device "nvmf_tgt_br" 00:15:42.789 17:04:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@155 -- # true 00:15:42.789 17:04:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:42.789 Cannot find device "nvmf_tgt_br2" 00:15:42.789 17:04:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@156 -- # true 00:15:42.789 17:04:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:42.789 17:04:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:42.789 Cannot find device "nvmf_tgt_br" 00:15:42.789 17:04:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@158 -- # true 00:15:42.789 17:04:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:42.789 Cannot find device "nvmf_tgt_br2" 00:15:42.789 17:04:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@159 -- # true 00:15:42.789 17:04:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:42.789 17:04:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:42.789 17:04:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:42.789 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:42.789 17:04:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:15:42.789 17:04:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:42.789 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:42.789 17:04:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:15:42.789 17:04:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:43.049 17:04:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link 
add nvmf_init_if type veth peer name nvmf_init_br 00:15:43.049 17:04:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:43.049 17:04:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:43.049 17:04:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:43.049 17:04:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:43.049 17:04:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:43.049 17:04:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:43.049 17:04:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:43.049 17:04:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:43.049 17:04:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:43.049 17:04:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:43.049 17:04:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:43.049 17:04:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:43.049 17:04:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:43.049 17:04:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:43.049 17:04:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:43.049 17:04:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:43.049 17:04:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:43.049 17:04:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:43.049 17:04:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:43.049 17:04:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:43.049 17:04:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:43.049 17:04:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:43.049 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:43.049 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:15:43.049 00:15:43.049 --- 10.0.0.2 ping statistics --- 00:15:43.049 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:43.049 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:15:43.049 17:04:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:43.049 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:43.049 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.035 ms 00:15:43.049 00:15:43.049 --- 10.0.0.3 ping statistics --- 00:15:43.049 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:43.049 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:15:43.049 17:04:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:43.049 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:43.049 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:15:43.049 00:15:43.049 --- 10.0.0.1 ping statistics --- 00:15:43.049 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:43.049 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:15:43.049 17:04:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:43.049 17:04:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@433 -- # return 0 00:15:43.049 17:04:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:43.049 17:04:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:43.049 17:04:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:43.049 17:04:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:43.049 17:04:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:43.049 17:04:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:43.049 17:04:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:43.049 17:04:33 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:15:43.049 17:04:33 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:15:43.049 17:04:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:43.049 17:04:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:43.049 17:04:33 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=75317 00:15:43.049 17:04:33 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:43.049 17:04:33 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:43.049 17:04:33 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 75317 00:15:43.049 17:04:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 75317 ']' 00:15:43.049 17:04:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:43.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:43.049 17:04:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:43.049 17:04:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:43.049 17:04:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:43.049 17:04:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:43.049 [2024-07-15 17:04:33.346580] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:15:43.308 [2024-07-15 17:04:33.346957] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:43.308 [2024-07-15 17:04:33.489998] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:43.308 [2024-07-15 17:04:33.594299] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:15:43.308 [2024-07-15 17:04:33.594660] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:43.308 [2024-07-15 17:04:33.594682] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:43.308 [2024-07-15 17:04:33.594693] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:43.308 [2024-07-15 17:04:33.594700] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:43.308 [2024-07-15 17:04:33.594805] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:43.308 [2024-07-15 17:04:33.594950] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:43.308 [2024-07-15 17:04:33.595443] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:43.308 [2024-07-15 17:04:33.595452] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:43.565 [2024-07-15 17:04:33.648542] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:44.130 17:04:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:44.130 17:04:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:15:44.130 17:04:34 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:44.388 [2024-07-15 17:04:34.511705] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:44.388 17:04:34 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:15:44.388 17:04:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:44.388 17:04:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:44.388 17:04:34 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:44.646 Malloc1 00:15:44.646 17:04:34 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:44.985 17:04:35 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:45.243 17:04:35 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:45.502 [2024-07-15 17:04:35.575281] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:45.502 17:04:35 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:45.761 17:04:35 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:15:45.761 17:04:35 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:45.761 17:04:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:45.761 17:04:35 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:15:45.761 17:04:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:45.761 17:04:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:15:45.761 17:04:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:45.761 17:04:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:15:45.761 17:04:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:15:45.761 17:04:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:45.761 17:04:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:15:45.761 17:04:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:45.761 17:04:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:45.761 17:04:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:15:45.761 17:04:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:15:45.761 17:04:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:45.761 17:04:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:45.761 17:04:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:15:45.761 17:04:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:45.761 17:04:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:15:45.761 17:04:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:15:45.761 17:04:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:45.761 17:04:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:45.761 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:15:45.761 fio-3.35 00:15:45.761 Starting 1 thread 00:15:48.292 00:15:48.292 test: (groupid=0, jobs=1): err= 0: pid=75399: Mon Jul 15 17:04:38 2024 00:15:48.292 read: IOPS=8486, BW=33.2MiB/s (34.8MB/s)(66.5MiB/2007msec) 00:15:48.292 slat (usec): min=2, max=328, avg= 2.52, stdev= 3.23 00:15:48.292 clat (usec): min=2610, max=17375, avg=7857.76, stdev=1052.95 00:15:48.292 lat (usec): min=2657, max=17378, avg=7860.28, stdev=1052.77 00:15:48.292 clat percentiles (usec): 00:15:48.292 | 1.00th=[ 6456], 5.00th=[ 6783], 10.00th=[ 6980], 20.00th=[ 7177], 00:15:48.292 | 30.00th=[ 7373], 40.00th=[ 7439], 50.00th=[ 7570], 60.00th=[ 7767], 00:15:48.292 | 70.00th=[ 7963], 80.00th=[ 8225], 90.00th=[ 9372], 95.00th=[ 9896], 00:15:48.292 | 99.00th=[11076], 99.50th=[12780], 99.90th=[16581], 99.95th=[16909], 00:15:48.292 | 99.99th=[17433] 00:15:48.292 bw ( KiB/s): min=30576, max=35656, per=99.98%, avg=33942.00, stdev=2329.35, samples=4 00:15:48.292 iops : min= 7644, max= 8914, avg=8485.50, stdev=582.34, samples=4 00:15:48.292 write: IOPS=8487, BW=33.2MiB/s (34.8MB/s)(66.5MiB/2007msec); 0 zone resets 00:15:48.292 slat 
(usec): min=2, max=258, avg= 2.61, stdev= 2.19 00:15:48.292 clat (usec): min=2458, max=16401, avg=7155.96, stdev=915.39 00:15:48.292 lat (usec): min=2473, max=16403, avg=7158.57, stdev=915.30 00:15:48.292 clat percentiles (usec): 00:15:48.292 | 1.00th=[ 5866], 5.00th=[ 6259], 10.00th=[ 6390], 20.00th=[ 6587], 00:15:48.292 | 30.00th=[ 6718], 40.00th=[ 6849], 50.00th=[ 6915], 60.00th=[ 7046], 00:15:48.292 | 70.00th=[ 7242], 80.00th=[ 7439], 90.00th=[ 8455], 95.00th=[ 8979], 00:15:48.292 | 99.00th=[ 9896], 99.50th=[11469], 99.90th=[14615], 99.95th=[14877], 00:15:48.292 | 99.99th=[15795] 00:15:48.292 bw ( KiB/s): min=30152, max=35712, per=99.97%, avg=33938.00, stdev=2550.51, samples=4 00:15:48.292 iops : min= 7538, max= 8928, avg=8484.50, stdev=637.63, samples=4 00:15:48.292 lat (msec) : 4=0.09%, 10=97.11%, 20=2.80% 00:15:48.292 cpu : usr=70.89%, sys=21.64%, ctx=15, majf=0, minf=7 00:15:48.292 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:15:48.292 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:48.292 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:48.292 issued rwts: total=17033,17034,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:48.292 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:48.292 00:15:48.292 Run status group 0 (all jobs): 00:15:48.292 READ: bw=33.2MiB/s (34.8MB/s), 33.2MiB/s-33.2MiB/s (34.8MB/s-34.8MB/s), io=66.5MiB (69.8MB), run=2007-2007msec 00:15:48.292 WRITE: bw=33.2MiB/s (34.8MB/s), 33.2MiB/s-33.2MiB/s (34.8MB/s-34.8MB/s), io=66.5MiB (69.8MB), run=2007-2007msec 00:15:48.292 17:04:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:15:48.292 17:04:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:15:48.292 17:04:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:15:48.292 17:04:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:48.292 17:04:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:15:48.292 17:04:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:48.292 17:04:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:15:48.292 17:04:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:15:48.292 17:04:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:48.292 17:04:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:15:48.292 17:04:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:48.292 17:04:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:48.292 17:04:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:15:48.292 17:04:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:15:48.292 17:04:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:48.292 17:04:38 
nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:48.292 17:04:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:15:48.292 17:04:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:48.292 17:04:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:15:48.292 17:04:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:15:48.292 17:04:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:48.292 17:04:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:15:48.292 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:15:48.292 fio-3.35 00:15:48.292 Starting 1 thread 00:15:50.829 00:15:50.829 test: (groupid=0, jobs=1): err= 0: pid=75443: Mon Jul 15 17:04:40 2024 00:15:50.829 read: IOPS=8051, BW=126MiB/s (132MB/s)(253MiB/2008msec) 00:15:50.829 slat (usec): min=3, max=120, avg= 3.69, stdev= 1.72 00:15:50.829 clat (usec): min=2196, max=24778, avg=8885.53, stdev=2821.00 00:15:50.829 lat (usec): min=2199, max=24782, avg=8889.23, stdev=2821.03 00:15:50.829 clat percentiles (usec): 00:15:50.829 | 1.00th=[ 3982], 5.00th=[ 4948], 10.00th=[ 5538], 20.00th=[ 6325], 00:15:50.829 | 30.00th=[ 7177], 40.00th=[ 7898], 50.00th=[ 8586], 60.00th=[ 9241], 00:15:50.829 | 70.00th=[10159], 80.00th=[11076], 90.00th=[12780], 95.00th=[14353], 00:15:50.829 | 99.00th=[15926], 99.50th=[16319], 99.90th=[21365], 99.95th=[21365], 00:15:50.829 | 99.99th=[21627] 00:15:50.829 bw ( KiB/s): min=54336, max=77632, per=51.08%, avg=65800.00, stdev=10910.78, samples=4 00:15:50.829 iops : min= 3396, max= 4852, avg=4112.50, stdev=681.92, samples=4 00:15:50.829 write: IOPS=4748, BW=74.2MiB/s (77.8MB/s)(135MiB/1822msec); 0 zone resets 00:15:50.829 slat (usec): min=35, max=166, avg=37.83, stdev= 4.64 00:15:50.829 clat (usec): min=3928, max=24415, avg=12266.50, stdev=2568.45 00:15:50.829 lat (usec): min=3965, max=24451, avg=12304.34, stdev=2568.26 00:15:50.829 clat percentiles (usec): 00:15:50.829 | 1.00th=[ 7767], 5.00th=[ 8848], 10.00th=[ 9503], 20.00th=[10159], 00:15:50.829 | 30.00th=[10814], 40.00th=[11338], 50.00th=[11863], 60.00th=[12387], 00:15:50.829 | 70.00th=[13042], 80.00th=[14222], 90.00th=[15664], 95.00th=[17171], 00:15:50.829 | 99.00th=[19792], 99.50th=[21103], 99.90th=[22414], 99.95th=[22676], 00:15:50.829 | 99.99th=[24511] 00:15:50.829 bw ( KiB/s): min=56768, max=80224, per=90.36%, avg=68648.00, stdev=11024.87, samples=4 00:15:50.829 iops : min= 3548, max= 5014, avg=4290.50, stdev=689.05, samples=4 00:15:50.829 lat (msec) : 4=0.66%, 10=49.75%, 20=49.14%, 50=0.46% 00:15:50.829 cpu : usr=83.32%, sys=12.45%, ctx=4, majf=0, minf=12 00:15:50.829 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:15:50.829 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:50.829 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:50.829 issued rwts: total=16168,8651,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:50.829 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:50.829 00:15:50.829 Run status group 0 (all jobs): 00:15:50.829 READ: bw=126MiB/s (132MB/s), 
126MiB/s-126MiB/s (132MB/s-132MB/s), io=253MiB (265MB), run=2008-2008msec 00:15:50.829 WRITE: bw=74.2MiB/s (77.8MB/s), 74.2MiB/s-74.2MiB/s (77.8MB/s-77.8MB/s), io=135MiB (142MB), run=1822-1822msec 00:15:50.829 17:04:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:50.829 17:04:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:15:50.829 17:04:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:15:50.829 17:04:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:15:50.829 17:04:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:15:50.829 17:04:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:50.829 17:04:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:15:51.087 17:04:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:51.087 17:04:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:15:51.087 17:04:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:51.087 17:04:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:51.087 rmmod nvme_tcp 00:15:51.087 rmmod nvme_fabrics 00:15:51.087 rmmod nvme_keyring 00:15:51.087 17:04:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:51.087 17:04:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:15:51.087 17:04:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:15:51.087 17:04:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 75317 ']' 00:15:51.087 17:04:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 75317 00:15:51.087 17:04:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 75317 ']' 00:15:51.087 17:04:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 75317 00:15:51.087 17:04:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:15:51.087 17:04:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:51.087 17:04:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75317 00:15:51.087 killing process with pid 75317 00:15:51.087 17:04:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:51.087 17:04:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:51.087 17:04:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75317' 00:15:51.087 17:04:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 75317 00:15:51.087 17:04:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 75317 00:15:51.344 17:04:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:51.344 17:04:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:51.344 17:04:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:51.344 17:04:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:51.344 17:04:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:51.344 17:04:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:51.344 17:04:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
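Both fio jobs in the test above (example_config.fio with --bs=4096 and mock_sgl_config.fio) are launched the same way: the SPDK NVMe fio plugin is LD_PRELOADed into a stock fio binary and the target is addressed through the plugin's filename syntax instead of a block device. Condensed from the fio_plugin traces above, with paths exactly as they appear in the log:

  LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme \
    /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
    --filename='trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096

The ldd/grep/awk steps that precede each run only decide whether an ASAN runtime has to be prepended to LD_PRELOAD ahead of the plugin; in this build no sanitizer library is found, so LD_PRELOAD ends up containing just the plugin itself.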
00:15:51.344 17:04:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:51.344 17:04:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:51.344 00:15:51.344 real 0m8.681s 00:15:51.344 user 0m35.615s 00:15:51.344 sys 0m2.264s 00:15:51.344 17:04:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:51.344 17:04:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:51.344 ************************************ 00:15:51.344 END TEST nvmf_fio_host 00:15:51.344 ************************************ 00:15:51.344 17:04:41 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:51.344 17:04:41 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:15:51.344 17:04:41 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:51.344 17:04:41 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:51.344 17:04:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:51.344 ************************************ 00:15:51.344 START TEST nvmf_failover 00:15:51.344 ************************************ 00:15:51.344 17:04:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:15:51.344 * Looking for test storage... 00:15:51.602 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:51.602 17:04:41 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:51.602 17:04:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:15:51.602 17:04:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:51.602 17:04:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:51.602 17:04:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:51.602 17:04:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:51.602 17:04:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:51.602 17:04:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:51.602 17:04:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:51.602 17:04:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:51.602 17:04:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:51.602 17:04:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:51.602 17:04:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da 00:15:51.602 17:04:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=0b4e8503-7bac-4879-926a-209303c4b3da 00:15:51.602 17:04:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:51.602 17:04:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:51.602 17:04:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:51.602 17:04:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:51.602 17:04:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:51.602 17:04:41 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 
-- # [[ -e /bin/wpdk_common.sh ]] 00:15:51.602 17:04:41 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:51.602 17:04:41 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:51.602 17:04:41 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.602 17:04:41 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.602 17:04:41 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.602 17:04:41 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:15:51.602 17:04:41 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.602 17:04:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:15:51.602 17:04:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:51.602 17:04:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:51.602 17:04:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:51.602 17:04:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:51.602 17:04:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:51.602 17:04:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:51.602 17:04:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:51.602 17:04:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:15:51.602 17:04:41 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:51.602 17:04:41 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:51.602 17:04:41 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:51.602 17:04:41 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:51.602 17:04:41 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:15:51.602 17:04:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:51.602 17:04:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:51.602 17:04:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:51.602 17:04:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:51.602 17:04:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:51.602 17:04:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:51.602 17:04:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:51.602 17:04:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:51.602 17:04:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:51.602 17:04:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:51.602 17:04:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:51.602 17:04:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:51.602 17:04:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:51.602 17:04:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:51.602 17:04:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:51.602 17:04:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:51.602 17:04:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:51.602 17:04:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:51.602 17:04:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:51.602 17:04:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:51.602 17:04:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:51.602 17:04:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:51.602 17:04:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:51.602 17:04:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:51.602 17:04:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:51.602 17:04:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:51.602 17:04:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:51.602 17:04:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:51.602 Cannot find device "nvmf_tgt_br" 00:15:51.602 17:04:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@155 -- # true 00:15:51.602 17:04:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@156 -- # ip link 
set nvmf_tgt_br2 nomaster 00:15:51.602 Cannot find device "nvmf_tgt_br2" 00:15:51.602 17:04:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@156 -- # true 00:15:51.602 17:04:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:51.602 17:04:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:51.602 Cannot find device "nvmf_tgt_br" 00:15:51.602 17:04:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@158 -- # true 00:15:51.602 17:04:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:51.602 Cannot find device "nvmf_tgt_br2" 00:15:51.602 17:04:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@159 -- # true 00:15:51.602 17:04:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:51.602 17:04:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:51.602 17:04:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:51.602 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:51.602 17:04:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@162 -- # true 00:15:51.602 17:04:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:51.602 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:51.602 17:04:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@163 -- # true 00:15:51.602 17:04:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:51.602 17:04:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:51.602 17:04:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:51.602 17:04:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:51.602 17:04:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:51.602 17:04:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:51.602 17:04:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:51.602 17:04:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:51.602 17:04:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:51.602 17:04:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:51.602 17:04:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:51.602 17:04:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:51.602 17:04:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:51.602 17:04:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:51.860 17:04:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:51.860 17:04:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:51.860 17:04:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:51.860 17:04:41 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:51.860 17:04:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:51.860 17:04:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:51.860 17:04:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:51.860 17:04:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:51.860 17:04:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:51.860 17:04:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:51.860 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:51.860 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:15:51.860 00:15:51.860 --- 10.0.0.2 ping statistics --- 00:15:51.860 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:51.860 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:15:51.860 17:04:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:51.860 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:51.860 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:15:51.860 00:15:51.860 --- 10.0.0.3 ping statistics --- 00:15:51.860 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:51.860 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:15:51.860 17:04:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:51.860 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:51.860 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:15:51.860 00:15:51.860 --- 10.0.0.1 ping statistics --- 00:15:51.860 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:51.860 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:15:51.860 17:04:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:51.860 17:04:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@433 -- # return 0 00:15:51.860 17:04:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:51.860 17:04:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:51.860 17:04:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:51.860 17:04:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:51.860 17:04:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:51.860 17:04:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:51.860 17:04:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:51.860 17:04:42 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:15:51.860 17:04:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:51.860 17:04:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:51.860 17:04:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:51.860 17:04:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=75658 00:15:51.860 17:04:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 75658 00:15:51.860 17:04:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 
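nvmfappstart, traced just above, launches nvmf_tgt inside the freshly built namespace and then blocks in waitforlisten until the application's RPC socket answers. A minimal sketch of that pattern, assuming the default /var/tmp/spdk.sock socket and using rpc_get_methods purely as a readiness probe (the real waitforlisten helper in autotest_common.sh is more elaborate, and nvmfpid here is just an illustrative variable):

  ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  # poll until the RPC server inside nvmf_tgt is up and answering
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
  done
  echo "nvmf_tgt ($nvmfpid) is listening on /var/tmp/spdk.sock"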
00:15:51.860 17:04:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 75658 ']' 00:15:51.861 17:04:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:51.861 17:04:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:51.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:51.861 17:04:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:51.861 17:04:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:51.861 17:04:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:51.861 [2024-07-15 17:04:42.094969] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:15:51.861 [2024-07-15 17:04:42.095065] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:52.118 [2024-07-15 17:04:42.234311] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:52.118 [2024-07-15 17:04:42.374456] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:52.118 [2024-07-15 17:04:42.374769] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:52.118 [2024-07-15 17:04:42.375013] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:52.118 [2024-07-15 17:04:42.375224] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:52.118 [2024-07-15 17:04:42.375347] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:52.118 [2024-07-15 17:04:42.375629] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:52.118 [2024-07-15 17:04:42.375738] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:52.118 [2024-07-15 17:04:42.375743] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:52.376 [2024-07-15 17:04:42.438230] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:52.941 17:04:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:52.941 17:04:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:15:52.941 17:04:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:52.941 17:04:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:52.941 17:04:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:52.941 17:04:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:52.941 17:04:43 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:53.204 [2024-07-15 17:04:43.435020] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:53.204 17:04:43 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:15:53.767 Malloc0 00:15:53.767 17:04:43 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:53.767 17:04:44 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:54.023 17:04:44 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:54.281 [2024-07-15 17:04:44.494705] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:54.281 17:04:44 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:54.538 [2024-07-15 17:04:44.814959] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:54.832 17:04:44 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:15:54.832 [2024-07-15 17:04:45.083193] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:15:54.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
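Taken together, the failover.sh RPC calls above provision the whole target side in five steps: a TCP transport, a RAM-backed malloc bdev, one subsystem, its namespace, and three listeners for the host to fail over between. Condensed into a standalone sequence with the same arguments as in the trace (the RPC shorthand variable is only for brevity here):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  for port in 4420 4421 4422; do
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s "$port"
  done

The earlier nvmf_fio_host test builds the same shape of target (Malloc1, a single 4420 listener plus the discovery subsystem); the extra 4421/4422 listeners here exist only so they can be pulled out from under the host during the failover run.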
00:15:54.832 17:04:45 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=75717 00:15:54.832 17:04:45 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:15:54.832 17:04:45 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:54.832 17:04:45 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 75717 /var/tmp/bdevperf.sock 00:15:54.832 17:04:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 75717 ']' 00:15:54.832 17:04:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:54.832 17:04:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:54.832 17:04:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:54.832 17:04:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:54.832 17:04:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:56.240 17:04:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:56.240 17:04:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:15:56.240 17:04:46 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:56.240 NVMe0n1 00:15:56.240 17:04:46 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:56.804 00:15:56.804 17:04:46 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=75740 00:15:56.804 17:04:46 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:56.804 17:04:46 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:15:57.737 17:04:47 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:57.994 17:04:48 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:16:01.270 17:04:51 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:01.270 00:16:01.270 17:04:51 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:01.528 17:04:51 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:16:04.812 17:04:54 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:04.812 [2024-07-15 17:04:54.986302] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:04.812 
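The host side of the failover run is driven entirely over bdevperf's own RPC socket: bdevperf starts with -z (wait for RPC), two paths to the same subsystem are attached as NVMe0, perform_tests kicks off 15 seconds of verify I/O, and the script then removes and re-adds target listeners while that I/O is in flight. A condensed sketch of the choreography traced above and continued below, interleaving the two RPC sockets as the log does (the BPERF/TGT/NQN shorthands are only for readability here):

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &
  BPERF="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"
  TGT="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"      # nvmf_tgt on the default socket
  NQN=nqn.2016-06.io.spdk:cnode1

  $BPERF bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $NQN
  $BPERF bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n $NQN
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
  sleep 1
  $TGT nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4420   # force I/O onto 4421
  sleep 3
  $BPERF bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n $NQN
  $TGT nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4421   # force I/O onto 4422
  sleep 3
  $TGT nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4420      # bring 4420 back
  sleep 1
  $TGT nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4422
  wait   # perform_tests exiting 0 is the pass criterion for the run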
17:04:55 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:16:05.745 17:04:56 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:16:06.003 17:04:56 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 75740 00:16:12.630 0 00:16:12.630 17:05:01 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 75717 00:16:12.630 17:05:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 75717 ']' 00:16:12.630 17:05:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 75717 00:16:12.630 17:05:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:16:12.630 17:05:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:12.630 17:05:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75717 00:16:12.630 killing process with pid 75717 00:16:12.630 17:05:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:12.630 17:05:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:12.630 17:05:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75717' 00:16:12.630 17:05:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 75717 00:16:12.630 17:05:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 75717 00:16:12.630 17:05:02 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:12.630 [2024-07-15 17:04:45.164504] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:16:12.630 [2024-07-15 17:04:45.164645] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75717 ] 00:16:12.630 [2024-07-15 17:04:45.310966] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:12.630 [2024-07-15 17:04:45.452449] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:12.630 [2024-07-15 17:04:45.504773] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:12.630 Running I/O for 15 seconds... 
00:16:12.630 [2024-07-15 17:04:48.110067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:59552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.630 [2024-07-15 17:04:48.110173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.630 [2024-07-15 17:04:48.110209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:59680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.630 [2024-07-15 17:04:48.110227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.630 [2024-07-15 17:04:48.110245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:59688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.630 [2024-07-15 17:04:48.110261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.630 [2024-07-15 17:04:48.110278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:59696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.630 [2024-07-15 17:04:48.110293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.630 [2024-07-15 17:04:48.110311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:59704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.630 [2024-07-15 17:04:48.110327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.630 [2024-07-15 17:04:48.110345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:59712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.630 [2024-07-15 17:04:48.110375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.630 [2024-07-15 17:04:48.110394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:59720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.630 [2024-07-15 17:04:48.110409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.630 [2024-07-15 17:04:48.110427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:59728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.630 [2024-07-15 17:04:48.110443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.630 [2024-07-15 17:04:48.110460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:59736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.630 [2024-07-15 17:04:48.110476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.630 [2024-07-15 17:04:48.110492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:59744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.630 [2024-07-15 17:04:48.110508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.630 [2024-07-15 17:04:48.110526] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:59752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.630 [2024-07-15 17:04:48.110581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.630 [2024-07-15 17:04:48.110600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:59760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.630 [2024-07-15 17:04:48.110615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.630 [2024-07-15 17:04:48.110632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:59768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.631 [2024-07-15 17:04:48.110652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.631 [2024-07-15 17:04:48.110668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:59776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.631 [2024-07-15 17:04:48.110683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.631 [2024-07-15 17:04:48.110699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:59784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.631 [2024-07-15 17:04:48.110714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.631 [2024-07-15 17:04:48.110731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:59792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.631 [2024-07-15 17:04:48.110748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.631 [2024-07-15 17:04:48.110765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:59800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.631 [2024-07-15 17:04:48.110781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.631 [2024-07-15 17:04:48.110808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:59808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.631 [2024-07-15 17:04:48.110824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.631 [2024-07-15 17:04:48.110843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:59816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.631 [2024-07-15 17:04:48.110858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.631 [2024-07-15 17:04:48.110875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:59824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.631 [2024-07-15 17:04:48.110890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.631 [2024-07-15 17:04:48.110907] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:59832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.631 [2024-07-15 17:04:48.110922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.631 [2024-07-15 17:04:48.110939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:59840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.631 [2024-07-15 17:04:48.110954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.631 [2024-07-15 17:04:48.110971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:59848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.631 [2024-07-15 17:04:48.110986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.631 [2024-07-15 17:04:48.111011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:59856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.631 [2024-07-15 17:04:48.111028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.631 [2024-07-15 17:04:48.111044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:59864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.631 [2024-07-15 17:04:48.111059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.631 [2024-07-15 17:04:48.111076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:59872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.631 [2024-07-15 17:04:48.111091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.631 [2024-07-15 17:04:48.111108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:59880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.631 [2024-07-15 17:04:48.111123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.631 [2024-07-15 17:04:48.111140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:59888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.631 [2024-07-15 17:04:48.111155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.631 [2024-07-15 17:04:48.111172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:59896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.631 [2024-07-15 17:04:48.111187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.631 [2024-07-15 17:04:48.111204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:59904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.631 [2024-07-15 17:04:48.111219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.631 [2024-07-15 17:04:48.111236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:59912 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.631 [2024-07-15 17:04:48.111252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.631 [2024-07-15 17:04:48.111268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:59920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.631 [2024-07-15 17:04:48.111284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.631 [2024-07-15 17:04:48.111300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:59928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.631 [2024-07-15 17:04:48.111315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.631 [2024-07-15 17:04:48.111338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:59936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.631 [2024-07-15 17:04:48.111365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.631 [2024-07-15 17:04:48.111385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:59944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.631 [2024-07-15 17:04:48.111402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.631 [2024-07-15 17:04:48.111419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:59952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.631 [2024-07-15 17:04:48.111434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.631 [2024-07-15 17:04:48.111459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:59960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.631 [2024-07-15 17:04:48.111488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.631 [2024-07-15 17:04:48.111505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:59968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.631 [2024-07-15 17:04:48.111521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.631 [2024-07-15 17:04:48.111538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:59976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.631 [2024-07-15 17:04:48.111553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.631 [2024-07-15 17:04:48.111570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:59984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.631 [2024-07-15 17:04:48.111586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.631 [2024-07-15 17:04:48.111602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:59992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.631 
[2024-07-15 17:04:48.111618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.631 [2024-07-15 17:04:48.111635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:60000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.631 [2024-07-15 17:04:48.111650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.631 [2024-07-15 17:04:48.111667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:60008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.631 [2024-07-15 17:04:48.111682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.631 [2024-07-15 17:04:48.111698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:60016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.631 [2024-07-15 17:04:48.111714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.631 [2024-07-15 17:04:48.111731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:60024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.631 [2024-07-15 17:04:48.111746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.631 [2024-07-15 17:04:48.111770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:60032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.631 [2024-07-15 17:04:48.111785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.631 [2024-07-15 17:04:48.111802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:60040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.631 [2024-07-15 17:04:48.111817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.631 [2024-07-15 17:04:48.111833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:60048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.631 [2024-07-15 17:04:48.111848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.631 [2024-07-15 17:04:48.111865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:60056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.631 [2024-07-15 17:04:48.111888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.631 [2024-07-15 17:04:48.111912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:60064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.631 [2024-07-15 17:04:48.111928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.631 [2024-07-15 17:04:48.111944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:60072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.631 [2024-07-15 17:04:48.111959] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.631 [2024-07-15 17:04:48.111976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:60080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.631 [2024-07-15 17:04:48.111991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.631 [2024-07-15 17:04:48.112008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:60088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.631 [2024-07-15 17:04:48.112023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.631 [2024-07-15 17:04:48.112040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:60096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.631 [2024-07-15 17:04:48.112055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.631 [2024-07-15 17:04:48.112071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:60104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.631 [2024-07-15 17:04:48.112087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.632 [2024-07-15 17:04:48.112103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:60112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.632 [2024-07-15 17:04:48.112118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.632 [2024-07-15 17:04:48.112135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:60120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.632 [2024-07-15 17:04:48.112150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.632 [2024-07-15 17:04:48.112167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:60128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.632 [2024-07-15 17:04:48.112182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.632 [2024-07-15 17:04:48.112199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:60136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.632 [2024-07-15 17:04:48.112214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.632 [2024-07-15 17:04:48.112230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:60144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.632 [2024-07-15 17:04:48.112246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.632 [2024-07-15 17:04:48.112274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:60152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.632 [2024-07-15 17:04:48.112289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.632 [2024-07-15 17:04:48.112313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:60160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.632 [2024-07-15 17:04:48.112329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.632 [2024-07-15 17:04:48.112346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:60168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.632 [2024-07-15 17:04:48.112374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.632 [2024-07-15 17:04:48.112392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:60176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.632 [2024-07-15 17:04:48.112407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.632 [2024-07-15 17:04:48.112424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:60184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.632 [2024-07-15 17:04:48.112439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.632 [2024-07-15 17:04:48.112463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:60192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.632 [2024-07-15 17:04:48.112479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.632 [2024-07-15 17:04:48.112495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:60200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.632 [2024-07-15 17:04:48.112511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.632 [2024-07-15 17:04:48.112528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:60208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.632 [2024-07-15 17:04:48.112543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.632 [2024-07-15 17:04:48.112560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:60216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.632 [2024-07-15 17:04:48.112575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.632 [2024-07-15 17:04:48.112591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:60224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.632 [2024-07-15 17:04:48.112606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.632 [2024-07-15 17:04:48.112623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:60232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.632 [2024-07-15 17:04:48.112639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:16:12.632 [2024-07-15 17:04:48.112656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:60240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.632 [2024-07-15 17:04:48.112671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.632 [2024-07-15 17:04:48.112688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:60248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.632 [2024-07-15 17:04:48.112703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.632 [2024-07-15 17:04:48.112719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:60256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.632 [2024-07-15 17:04:48.112734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.632 [2024-07-15 17:04:48.112759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:60264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.632 [2024-07-15 17:04:48.112775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.632 [2024-07-15 17:04:48.112792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:60272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.632 [2024-07-15 17:04:48.112807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.632 [2024-07-15 17:04:48.112830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:60280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.632 [2024-07-15 17:04:48.112846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.632 [2024-07-15 17:04:48.112863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:60288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.632 [2024-07-15 17:04:48.112878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.632 [2024-07-15 17:04:48.112895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:60296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.632 [2024-07-15 17:04:48.112910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.632 [2024-07-15 17:04:48.112926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:60304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.632 [2024-07-15 17:04:48.112941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.632 [2024-07-15 17:04:48.112959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:60312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.632 [2024-07-15 17:04:48.112974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.632 [2024-07-15 17:04:48.112997] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:60320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.632 [2024-07-15 17:04:48.113013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.632 [2024-07-15 17:04:48.113030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:60328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.632 [2024-07-15 17:04:48.113045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.632 [2024-07-15 17:04:48.113062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:60336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.632 [2024-07-15 17:04:48.113077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.632 [2024-07-15 17:04:48.113094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:60344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.632 [2024-07-15 17:04:48.113109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.632 [2024-07-15 17:04:48.113126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:60352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.632 [2024-07-15 17:04:48.113141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.632 [2024-07-15 17:04:48.113158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:60360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.632 [2024-07-15 17:04:48.113180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.632 [2024-07-15 17:04:48.113198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:60368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.632 [2024-07-15 17:04:48.113213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.632 [2024-07-15 17:04:48.113229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:60376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.632 [2024-07-15 17:04:48.113245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.632 [2024-07-15 17:04:48.113261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:60384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.632 [2024-07-15 17:04:48.113276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.632 [2024-07-15 17:04:48.113292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:60392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.632 [2024-07-15 17:04:48.113307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.632 [2024-07-15 17:04:48.113323] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:60400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.632 [2024-07-15 17:04:48.113339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.632 [2024-07-15 17:04:48.113372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:60408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.632 [2024-07-15 17:04:48.113390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.632 [2024-07-15 17:04:48.113407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:60416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.632 [2024-07-15 17:04:48.113423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.632 [2024-07-15 17:04:48.113439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:60424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.632 [2024-07-15 17:04:48.113454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.632 [2024-07-15 17:04:48.113471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:60432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.632 [2024-07-15 17:04:48.113486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.632 [2024-07-15 17:04:48.113502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:60440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.632 [2024-07-15 17:04:48.113518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.633 [2024-07-15 17:04:48.113542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:60448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.633 [2024-07-15 17:04:48.113558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.633 [2024-07-15 17:04:48.113575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:60456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.633 [2024-07-15 17:04:48.113590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.633 [2024-07-15 17:04:48.113614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:60464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.633 [2024-07-15 17:04:48.113630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.633 [2024-07-15 17:04:48.113647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:60472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.633 [2024-07-15 17:04:48.113662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.633 [2024-07-15 17:04:48.113679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:60480 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.633 [2024-07-15 17:04:48.113694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.633 [2024-07-15 17:04:48.113711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:60488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.633 [2024-07-15 17:04:48.113726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.633 [2024-07-15 17:04:48.113742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:60496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.633 [2024-07-15 17:04:48.113758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.633 [2024-07-15 17:04:48.113774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:60504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.633 [2024-07-15 17:04:48.113790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.633 [2024-07-15 17:04:48.113806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.633 [2024-07-15 17:04:48.113822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.633 [2024-07-15 17:04:48.113839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:60520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.633 [2024-07-15 17:04:48.113854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.633 [2024-07-15 17:04:48.113870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:60528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.633 [2024-07-15 17:04:48.113885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.633 [2024-07-15 17:04:48.113908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:60536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.633 [2024-07-15 17:04:48.113924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.633 [2024-07-15 17:04:48.113941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:60544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.633 [2024-07-15 17:04:48.113956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.633 [2024-07-15 17:04:48.113973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:60552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.633 [2024-07-15 17:04:48.113988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.633 [2024-07-15 17:04:48.114004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:59560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.633 
[2024-07-15 17:04:48.114026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.633 [2024-07-15 17:04:48.114044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:59568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.633 [2024-07-15 17:04:48.114059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.633 [2024-07-15 17:04:48.114083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:59576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.633 [2024-07-15 17:04:48.114099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.633 [2024-07-15 17:04:48.114116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:59584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.633 [2024-07-15 17:04:48.114131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.633 [2024-07-15 17:04:48.114148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:59592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.633 [2024-07-15 17:04:48.114163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.633 [2024-07-15 17:04:48.114180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:59600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.633 [2024-07-15 17:04:48.114194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.633 [2024-07-15 17:04:48.114211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:59608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.633 [2024-07-15 17:04:48.114226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.633 [2024-07-15 17:04:48.114243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:59616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.633 [2024-07-15 17:04:48.114258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.633 [2024-07-15 17:04:48.114274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:59624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.633 [2024-07-15 17:04:48.114290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.633 [2024-07-15 17:04:48.114306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:59632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.633 [2024-07-15 17:04:48.114321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.633 [2024-07-15 17:04:48.114337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:59640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.633 [2024-07-15 17:04:48.114352] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.633 [2024-07-15 17:04:48.114381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:59648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.633 [2024-07-15 17:04:48.114397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.633 [2024-07-15 17:04:48.114414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:59656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.633 [2024-07-15 17:04:48.114432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.633 [2024-07-15 17:04:48.114455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:59664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.633 [2024-07-15 17:04:48.114478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.633 [2024-07-15 17:04:48.114496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:59672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.633 [2024-07-15 17:04:48.114512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.633 [2024-07-15 17:04:48.114528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:60560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.633 [2024-07-15 17:04:48.114543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.633 [2024-07-15 17:04:48.114560] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6747c0 is same with the state(5) to be set 00:16:12.633 [2024-07-15 17:04:48.114580] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:12.633 [2024-07-15 17:04:48.114592] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:12.633 [2024-07-15 17:04:48.114604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60568 len:8 PRP1 0x0 PRP2 0x0 00:16:12.633 [2024-07-15 17:04:48.114625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.633 [2024-07-15 17:04:48.114705] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6747c0 was disconnected and freed. reset controller. 
00:16:12.633 [2024-07-15 17:04:48.114727] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:16:12.633 [2024-07-15 17:04:48.114795] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:12.633 [2024-07-15 17:04:48.114817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.633 [2024-07-15 17:04:48.114834] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:12.633 [2024-07-15 17:04:48.114849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.633 [2024-07-15 17:04:48.114865] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:12.633 [2024-07-15 17:04:48.114879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.633 [2024-07-15 17:04:48.114895] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:12.633 [2024-07-15 17:04:48.114909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.633 [2024-07-15 17:04:48.114924] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:12.633 [2024-07-15 17:04:48.114994] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x623570 (9): Bad file descriptor 00:16:12.633 [2024-07-15 17:04:48.118808] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:12.633 [2024-07-15 17:04:48.153804] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:16:12.633 [2024-07-15 17:04:51.744989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:67592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.633 [2024-07-15 17:04:51.745063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.633 [2024-07-15 17:04:51.745096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:67600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.633 [2024-07-15 17:04:51.745136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.633 [2024-07-15 17:04:51.745155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:67608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.633 [2024-07-15 17:04:51.745170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.633 [2024-07-15 17:04:51.745186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:67616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.634 [2024-07-15 17:04:51.745202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.634 [2024-07-15 17:04:51.745218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:67624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.634 [2024-07-15 17:04:51.745233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.634 [2024-07-15 17:04:51.745249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:67632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.634 [2024-07-15 17:04:51.745264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.634 [2024-07-15 17:04:51.745280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:67640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.634 [2024-07-15 17:04:51.745295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.634 [2024-07-15 17:04:51.745311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:67648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.634 [2024-07-15 17:04:51.745326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.634 [2024-07-15 17:04:51.745343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:67016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.634 [2024-07-15 17:04:51.745372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.634 [2024-07-15 17:04:51.745391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:67024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.634 [2024-07-15 17:04:51.745406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.634 [2024-07-15 17:04:51.745422] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:67032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.634 [2024-07-15 17:04:51.745437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.634 [2024-07-15 17:04:51.745453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:67040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.634 [2024-07-15 17:04:51.745468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.634 [2024-07-15 17:04:51.745485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:67048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.634 [2024-07-15 17:04:51.745499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.634 [2024-07-15 17:04:51.745516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:67056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.634 [2024-07-15 17:04:51.745531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.634 [2024-07-15 17:04:51.745557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:67064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.634 [2024-07-15 17:04:51.745578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.634 [2024-07-15 17:04:51.745594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:67072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.634 [2024-07-15 17:04:51.745609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.634 [2024-07-15 17:04:51.745626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:67080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.634 [2024-07-15 17:04:51.745641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.634 [2024-07-15 17:04:51.745660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:67088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.634 [2024-07-15 17:04:51.745676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.634 [2024-07-15 17:04:51.745692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:67096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.634 [2024-07-15 17:04:51.745707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.634 [2024-07-15 17:04:51.745724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:67104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.634 [2024-07-15 17:04:51.745739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.634 [2024-07-15 17:04:51.745756] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:20 nsid:1 lba:67112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.634 [2024-07-15 17:04:51.745771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.634 [2024-07-15 17:04:51.745788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:67120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.634 [2024-07-15 17:04:51.745803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.634 [2024-07-15 17:04:51.745821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:67128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.634 [2024-07-15 17:04:51.745836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.634 [2024-07-15 17:04:51.745852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:67136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.634 [2024-07-15 17:04:51.745867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.634 [2024-07-15 17:04:51.745884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:67656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.634 [2024-07-15 17:04:51.745899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.634 [2024-07-15 17:04:51.745916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:67664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.634 [2024-07-15 17:04:51.745931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.634 [2024-07-15 17:04:51.745947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.634 [2024-07-15 17:04:51.745971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.634 [2024-07-15 17:04:51.745988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:67680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.634 [2024-07-15 17:04:51.746004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.634 [2024-07-15 17:04:51.746021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:67688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.634 [2024-07-15 17:04:51.746036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.634 [2024-07-15 17:04:51.746053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:67696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.634 [2024-07-15 17:04:51.746068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.634 [2024-07-15 17:04:51.746085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:67704 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.634 [2024-07-15 17:04:51.746100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.634 [2024-07-15 17:04:51.746116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:67712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.634 [2024-07-15 17:04:51.746131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.634 [2024-07-15 17:04:51.746150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:67144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.634 [2024-07-15 17:04:51.746165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.634 [2024-07-15 17:04:51.746182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:67152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.634 [2024-07-15 17:04:51.746198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.634 [2024-07-15 17:04:51.746215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:67160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.634 [2024-07-15 17:04:51.746230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.634 [2024-07-15 17:04:51.746247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:67168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.634 [2024-07-15 17:04:51.746261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.634 [2024-07-15 17:04:51.746278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:67176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.634 [2024-07-15 17:04:51.746293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.634 [2024-07-15 17:04:51.746309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:67184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.635 [2024-07-15 17:04:51.746324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.635 [2024-07-15 17:04:51.746341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:67192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.635 [2024-07-15 17:04:51.746371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.635 [2024-07-15 17:04:51.746398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:67200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.635 [2024-07-15 17:04:51.746414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.635 [2024-07-15 17:04:51.746431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:67208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:12.635 [2024-07-15 17:04:51.746446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.635 [2024-07-15 17:04:51.746463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:67216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.635 [2024-07-15 17:04:51.746479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.635 [2024-07-15 17:04:51.746496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:67224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.635 [2024-07-15 17:04:51.746511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.635 [2024-07-15 17:04:51.746528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:67232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.635 [2024-07-15 17:04:51.746543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.635 [2024-07-15 17:04:51.746560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:67240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.635 [2024-07-15 17:04:51.746575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.635 [2024-07-15 17:04:51.746592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:67248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.635 [2024-07-15 17:04:51.746607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.635 [2024-07-15 17:04:51.746623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:67256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.635 [2024-07-15 17:04:51.746639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.635 [2024-07-15 17:04:51.746655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:67264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.635 [2024-07-15 17:04:51.746670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.635 [2024-07-15 17:04:51.746687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:67720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.635 [2024-07-15 17:04:51.746702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.635 [2024-07-15 17:04:51.746719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:67728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.635 [2024-07-15 17:04:51.746734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.635 [2024-07-15 17:04:51.746751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:67736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.635 [2024-07-15 17:04:51.746766] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.635 [2024-07-15 17:04:51.746782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:67744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.635 [2024-07-15 17:04:51.746804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.635 [2024-07-15 17:04:51.746828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:67752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.635 [2024-07-15 17:04:51.746843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.635 [2024-07-15 17:04:51.746860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:67760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.635 [2024-07-15 17:04:51.746875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.635 [2024-07-15 17:04:51.746892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:67768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.635 [2024-07-15 17:04:51.746907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.635 [2024-07-15 17:04:51.746923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:67776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.635 [2024-07-15 17:04:51.746938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.635 [2024-07-15 17:04:51.746955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:67784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.635 [2024-07-15 17:04:51.746970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.635 [2024-07-15 17:04:51.746987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:67792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.635 [2024-07-15 17:04:51.747002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.635 [2024-07-15 17:04:51.747018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:67800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.635 [2024-07-15 17:04:51.747033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.635 [2024-07-15 17:04:51.747050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:67808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.635 [2024-07-15 17:04:51.747066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.635 [2024-07-15 17:04:51.747083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:67816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.635 [2024-07-15 17:04:51.747098] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.635 [2024-07-15 17:04:51.747115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:67824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.635 [2024-07-15 17:04:51.747130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.635 [2024-07-15 17:04:51.747147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:67832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.635 [2024-07-15 17:04:51.747162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.635 [2024-07-15 17:04:51.747178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:67840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.635 [2024-07-15 17:04:51.747193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.635 [2024-07-15 17:04:51.747210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:67272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.635 [2024-07-15 17:04:51.747232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.635 [2024-07-15 17:04:51.747249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:67280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.635 [2024-07-15 17:04:51.747265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.635 [2024-07-15 17:04:51.747282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:67288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.635 [2024-07-15 17:04:51.747297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.635 [2024-07-15 17:04:51.747314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:67296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.635 [2024-07-15 17:04:51.747329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.635 [2024-07-15 17:04:51.747346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:67304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.635 [2024-07-15 17:04:51.747374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.635 [2024-07-15 17:04:51.747392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:67312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.635 [2024-07-15 17:04:51.747408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.635 [2024-07-15 17:04:51.747434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:67320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.635 [2024-07-15 17:04:51.747449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.635 [2024-07-15 17:04:51.747475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:67328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.635 [2024-07-15 17:04:51.747492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.635 [2024-07-15 17:04:51.747509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:67336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.635 [2024-07-15 17:04:51.747524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.635 [2024-07-15 17:04:51.747541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:67344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.635 [2024-07-15 17:04:51.747555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.635 [2024-07-15 17:04:51.747572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:67352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.635 [2024-07-15 17:04:51.747587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.635 [2024-07-15 17:04:51.747604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:67360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.635 [2024-07-15 17:04:51.747619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.635 [2024-07-15 17:04:51.747644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:67368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.635 [2024-07-15 17:04:51.747660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.635 [2024-07-15 17:04:51.747685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:67376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.635 [2024-07-15 17:04:51.747701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.635 [2024-07-15 17:04:51.747717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:67384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.635 [2024-07-15 17:04:51.747732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.635 [2024-07-15 17:04:51.747749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:67392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.635 [2024-07-15 17:04:51.747764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.635 [2024-07-15 17:04:51.747781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:67848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.636 [2024-07-15 17:04:51.747796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:16:12.636 [2024-07-15 17:04:51.747812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:67856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.636 [2024-07-15 17:04:51.747834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.636 [2024-07-15 17:04:51.747851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:67864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.636 [2024-07-15 17:04:51.747866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.636 [2024-07-15 17:04:51.747883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:67872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.636 [2024-07-15 17:04:51.747897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.636 [2024-07-15 17:04:51.747914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:67880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.636 [2024-07-15 17:04:51.747929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.636 [2024-07-15 17:04:51.747946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:67888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.636 [2024-07-15 17:04:51.747960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.636 [2024-07-15 17:04:51.747981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:67896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.636 [2024-07-15 17:04:51.747996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.636 [2024-07-15 17:04:51.748013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:67904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.636 [2024-07-15 17:04:51.748028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.636 [2024-07-15 17:04:51.748044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:67912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.636 [2024-07-15 17:04:51.748059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.636 [2024-07-15 17:04:51.748075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:67920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.636 [2024-07-15 17:04:51.748097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.636 [2024-07-15 17:04:51.748115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:67928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.636 [2024-07-15 17:04:51.748130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.636 [2024-07-15 17:04:51.748146] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:67936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.636 [2024-07-15 17:04:51.748161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.636 [2024-07-15 17:04:51.748182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:67944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.636 [2024-07-15 17:04:51.748198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.636 [2024-07-15 17:04:51.748214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:67952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.636 [2024-07-15 17:04:51.748229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.636 [2024-07-15 17:04:51.748246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:67960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.636 [2024-07-15 17:04:51.748261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.636 [2024-07-15 17:04:51.748277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:67968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.636 [2024-07-15 17:04:51.748292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.636 [2024-07-15 17:04:51.748309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:67400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.636 [2024-07-15 17:04:51.748324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.636 [2024-07-15 17:04:51.748340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:67408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.636 [2024-07-15 17:04:51.748370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.636 [2024-07-15 17:04:51.748390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:67416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.636 [2024-07-15 17:04:51.748406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.636 [2024-07-15 17:04:51.748423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:67424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.636 [2024-07-15 17:04:51.748438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.636 [2024-07-15 17:04:51.748455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:67432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.636 [2024-07-15 17:04:51.748470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.636 [2024-07-15 17:04:51.748488] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:64 nsid:1 lba:67440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.636 [2024-07-15 17:04:51.748502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.636 [2024-07-15 17:04:51.748519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:67448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.636 [2024-07-15 17:04:51.748546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.636 [2024-07-15 17:04:51.748564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:67456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.636 [2024-07-15 17:04:51.748580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.636 [2024-07-15 17:04:51.748597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:67464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.636 [2024-07-15 17:04:51.748612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.636 [2024-07-15 17:04:51.748628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:67472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.636 [2024-07-15 17:04:51.748643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.636 [2024-07-15 17:04:51.748660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:67480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.636 [2024-07-15 17:04:51.748675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.636 [2024-07-15 17:04:51.748692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:67488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.636 [2024-07-15 17:04:51.748707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.636 [2024-07-15 17:04:51.748728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:67496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.636 [2024-07-15 17:04:51.748744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.636 [2024-07-15 17:04:51.748761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:67504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.636 [2024-07-15 17:04:51.748776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.636 [2024-07-15 17:04:51.748792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:67512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.636 [2024-07-15 17:04:51.748807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.636 [2024-07-15 17:04:51.748824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 
lba:67520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.636 [2024-07-15 17:04:51.748839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.636 [2024-07-15 17:04:51.748856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:67976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.636 [2024-07-15 17:04:51.748871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.636 [2024-07-15 17:04:51.748887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:67984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.636 [2024-07-15 17:04:51.748907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.636 [2024-07-15 17:04:51.748925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:67992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.636 [2024-07-15 17:04:51.748939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.636 [2024-07-15 17:04:51.748963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:68000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.636 [2024-07-15 17:04:51.748979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.636 [2024-07-15 17:04:51.748996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:68008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.636 [2024-07-15 17:04:51.749011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.636 [2024-07-15 17:04:51.749028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:68016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.636 [2024-07-15 17:04:51.749043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.636 [2024-07-15 17:04:51.749060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:68024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.636 [2024-07-15 17:04:51.749074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.636 [2024-07-15 17:04:51.749091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:68032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.636 [2024-07-15 17:04:51.749107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.636 [2024-07-15 17:04:51.749123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:67528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.636 [2024-07-15 17:04:51.749138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.636 [2024-07-15 17:04:51.749155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:67536 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:16:12.636 [2024-07-15 17:04:51.749170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.636 [2024-07-15 17:04:51.749186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:67544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.636 [2024-07-15 17:04:51.749201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.636 [2024-07-15 17:04:51.749218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:67552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.637 [2024-07-15 17:04:51.749233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.637 [2024-07-15 17:04:51.749255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:67560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.637 [2024-07-15 17:04:51.749271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.637 [2024-07-15 17:04:51.749287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:67568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.637 [2024-07-15 17:04:51.749302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.637 [2024-07-15 17:04:51.749319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:67576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.637 [2024-07-15 17:04:51.749334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.637 [2024-07-15 17:04:51.749350] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a5d30 is same with the state(5) to be set 00:16:12.637 [2024-07-15 17:04:51.749388] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:12.637 [2024-07-15 17:04:51.749401] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:12.637 [2024-07-15 17:04:51.749413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67584 len:8 PRP1 0x0 PRP2 0x0 00:16:12.637 [2024-07-15 17:04:51.749428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.637 [2024-07-15 17:04:51.749494] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6a5d30 was disconnected and freed. reset controller. 
00:16:12.637 [2024-07-15 17:04:51.749515] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:16:12.637 [2024-07-15 17:04:51.749573] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:12.637 [2024-07-15 17:04:51.749595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.637 [2024-07-15 17:04:51.749612] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:12.637 [2024-07-15 17:04:51.749626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.637 [2024-07-15 17:04:51.749642] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:12.637 [2024-07-15 17:04:51.749657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.637 [2024-07-15 17:04:51.749672] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:12.637 [2024-07-15 17:04:51.749687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.637 [2024-07-15 17:04:51.749702] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:12.637 [2024-07-15 17:04:51.749752] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x623570 (9): Bad file descriptor 00:16:12.637 [2024-07-15 17:04:51.753558] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:12.637 [2024-07-15 17:04:51.794690] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:16:12.637 [2024-07-15 17:04:56.252902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:2800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.637 [2024-07-15 17:04:56.252974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.637 [2024-07-15 17:04:56.253006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:2808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.637 [2024-07-15 17:04:56.253024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.637 [2024-07-15 17:04:56.253042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:2816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.637 [2024-07-15 17:04:56.253061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.637 [2024-07-15 17:04:56.253078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:2824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.637 [2024-07-15 17:04:56.253093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.637 [2024-07-15 17:04:56.253110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:2832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.637 [2024-07-15 17:04:56.253151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.637 [2024-07-15 17:04:56.253170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:2840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.637 [2024-07-15 17:04:56.253185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.637 [2024-07-15 17:04:56.253202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:2848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.637 [2024-07-15 17:04:56.253218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.637 [2024-07-15 17:04:56.253235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:2856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.637 [2024-07-15 17:04:56.253250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.637 [2024-07-15 17:04:56.253267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:2288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.637 [2024-07-15 17:04:56.253282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.637 [2024-07-15 17:04:56.253299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.637 [2024-07-15 17:04:56.253314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.637 [2024-07-15 17:04:56.253331] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.637 [2024-07-15 17:04:56.253347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.637 [2024-07-15 17:04:56.253381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.637 [2024-07-15 17:04:56.253409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.637 [2024-07-15 17:04:56.253426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.637 [2024-07-15 17:04:56.253442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.637 [2024-07-15 17:04:56.253459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.637 [2024-07-15 17:04:56.253474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.637 [2024-07-15 17:04:56.253491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.637 [2024-07-15 17:04:56.253506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.637 [2024-07-15 17:04:56.253523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.637 [2024-07-15 17:04:56.253537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.637 [2024-07-15 17:04:56.253554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:2864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.637 [2024-07-15 17:04:56.253569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.637 [2024-07-15 17:04:56.253588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:2872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.637 [2024-07-15 17:04:56.253613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.637 [2024-07-15 17:04:56.253631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:2880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.637 [2024-07-15 17:04:56.253646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.637 [2024-07-15 17:04:56.253663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:2888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.637 [2024-07-15 17:04:56.253678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.637 [2024-07-15 17:04:56.253695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:52 nsid:1 lba:2896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.637 [2024-07-15 17:04:56.253710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.637 [2024-07-15 17:04:56.253727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:2904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.637 [2024-07-15 17:04:56.253742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.637 [2024-07-15 17:04:56.253759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:2912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.637 [2024-07-15 17:04:56.253774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.637 [2024-07-15 17:04:56.253791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:2920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.637 [2024-07-15 17:04:56.253816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.637 [2024-07-15 17:04:56.253833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:2352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.637 [2024-07-15 17:04:56.253848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.637 [2024-07-15 17:04:56.253865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:2360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.637 [2024-07-15 17:04:56.253881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.637 [2024-07-15 17:04:56.253897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:2368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.637 [2024-07-15 17:04:56.253913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.637 [2024-07-15 17:04:56.253929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:2376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.637 [2024-07-15 17:04:56.253945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.637 [2024-07-15 17:04:56.253961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:2384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.637 [2024-07-15 17:04:56.253977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.637 [2024-07-15 17:04:56.253993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:2392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.637 [2024-07-15 17:04:56.254008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.637 [2024-07-15 17:04:56.254033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:2400 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:16:12.638 [2024-07-15 17:04:56.254049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.638 [2024-07-15 17:04:56.254065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:2408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.638 [2024-07-15 17:04:56.254080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.638 [2024-07-15 17:04:56.254097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:2928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.638 [2024-07-15 17:04:56.254112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.638 [2024-07-15 17:04:56.254129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:2936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.638 [2024-07-15 17:04:56.254144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.638 [2024-07-15 17:04:56.254161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:2944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.638 [2024-07-15 17:04:56.254176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.638 [2024-07-15 17:04:56.254193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:2952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.638 [2024-07-15 17:04:56.254209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.638 [2024-07-15 17:04:56.254225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:2960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.638 [2024-07-15 17:04:56.254240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.638 [2024-07-15 17:04:56.254256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:2968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.638 [2024-07-15 17:04:56.254271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.638 [2024-07-15 17:04:56.254288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:2976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.638 [2024-07-15 17:04:56.254303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.638 [2024-07-15 17:04:56.254320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:2984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.638 [2024-07-15 17:04:56.254335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.638 [2024-07-15 17:04:56.254352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:2416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.638 [2024-07-15 
17:04:56.254380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.638 [2024-07-15 17:04:56.254398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:2424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.638 [2024-07-15 17:04:56.254413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.638 [2024-07-15 17:04:56.254430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:2432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.638 [2024-07-15 17:04:56.254519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.638 [2024-07-15 17:04:56.254540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:2440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.638 [2024-07-15 17:04:56.254556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.638 [2024-07-15 17:04:56.254573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:2448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.638 [2024-07-15 17:04:56.254588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.638 [2024-07-15 17:04:56.254604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:2456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.638 [2024-07-15 17:04:56.254620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.638 [2024-07-15 17:04:56.254636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:2464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.638 [2024-07-15 17:04:56.254651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.638 [2024-07-15 17:04:56.254668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:2472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.638 [2024-07-15 17:04:56.254683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.638 [2024-07-15 17:04:56.254699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:2480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.638 [2024-07-15 17:04:56.254714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.638 [2024-07-15 17:04:56.254731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:2488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.638 [2024-07-15 17:04:56.254746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.638 [2024-07-15 17:04:56.254763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:2496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.638 [2024-07-15 17:04:56.254778] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.638 [2024-07-15 17:04:56.254795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:2504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.638 [2024-07-15 17:04:56.254810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.638 [2024-07-15 17:04:56.254827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:2512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.638 [2024-07-15 17:04:56.254843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.638 [2024-07-15 17:04:56.254860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:2520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.638 [2024-07-15 17:04:56.254875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.638 [2024-07-15 17:04:56.254892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:2528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.638 [2024-07-15 17:04:56.254907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.638 [2024-07-15 17:04:56.254932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:2536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.638 [2024-07-15 17:04:56.254948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.638 [2024-07-15 17:04:56.254965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:2992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.638 [2024-07-15 17:04:56.254980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.638 [2024-07-15 17:04:56.254997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:3000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.638 [2024-07-15 17:04:56.255012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.638 [2024-07-15 17:04:56.255028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:3008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.638 [2024-07-15 17:04:56.255044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.638 [2024-07-15 17:04:56.255061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:3016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.638 [2024-07-15 17:04:56.255076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.638 [2024-07-15 17:04:56.255093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:3024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.638 [2024-07-15 17:04:56.255108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.638 [2024-07-15 17:04:56.255124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:3032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.638 [2024-07-15 17:04:56.255140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.638 [2024-07-15 17:04:56.255156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:3040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.638 [2024-07-15 17:04:56.255171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.638 [2024-07-15 17:04:56.255188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:3048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.638 [2024-07-15 17:04:56.255203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.638 [2024-07-15 17:04:56.255220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:3056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.638 [2024-07-15 17:04:56.255235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.638 [2024-07-15 17:04:56.255252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:3064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.638 [2024-07-15 17:04:56.255268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.638 [2024-07-15 17:04:56.255285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:3072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.638 [2024-07-15 17:04:56.255300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.638 [2024-07-15 17:04:56.255317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:3080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.638 [2024-07-15 17:04:56.255332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.638 [2024-07-15 17:04:56.255368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:3088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.638 [2024-07-15 17:04:56.255386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.639 [2024-07-15 17:04:56.255403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:3096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.639 [2024-07-15 17:04:56.255419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.639 [2024-07-15 17:04:56.255436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:3104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.639 [2024-07-15 17:04:56.255451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:16:12.639 [2024-07-15 17:04:56.255478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:3112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.639 [2024-07-15 17:04:56.255504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.639 [2024-07-15 17:04:56.255522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:2544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.639 [2024-07-15 17:04:56.255537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.639 [2024-07-15 17:04:56.255554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.639 [2024-07-15 17:04:56.255569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.639 [2024-07-15 17:04:56.255586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.639 [2024-07-15 17:04:56.255602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.639 [2024-07-15 17:04:56.255619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.639 [2024-07-15 17:04:56.255634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.639 [2024-07-15 17:04:56.255651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.639 [2024-07-15 17:04:56.255666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.639 [2024-07-15 17:04:56.255683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.639 [2024-07-15 17:04:56.255698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.639 [2024-07-15 17:04:56.255715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.639 [2024-07-15 17:04:56.255730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.639 [2024-07-15 17:04:56.255747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:2600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.639 [2024-07-15 17:04:56.255762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.639 [2024-07-15 17:04:56.255779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:2608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.639 [2024-07-15 17:04:56.255813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.639 [2024-07-15 17:04:56.255830] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:2616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.639 [2024-07-15 17:04:56.255853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.639 [2024-07-15 17:04:56.255871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:2624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.639 [2024-07-15 17:04:56.255887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.639 [2024-07-15 17:04:56.255904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:2632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.639 [2024-07-15 17:04:56.255919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.639 [2024-07-15 17:04:56.255936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:2640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.639 [2024-07-15 17:04:56.255951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.639 [2024-07-15 17:04:56.255968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:2648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.639 [2024-07-15 17:04:56.255983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.639 [2024-07-15 17:04:56.256000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:2656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.639 [2024-07-15 17:04:56.256015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.639 [2024-07-15 17:04:56.256032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.639 [2024-07-15 17:04:56.256047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.639 [2024-07-15 17:04:56.256064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:3120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.639 [2024-07-15 17:04:56.256079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.639 [2024-07-15 17:04:56.256096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:3128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.639 [2024-07-15 17:04:56.256111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.639 [2024-07-15 17:04:56.256128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:3136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.639 [2024-07-15 17:04:56.256143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.639 [2024-07-15 17:04:56.256160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:117 nsid:1 lba:3144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.639 [2024-07-15 17:04:56.256175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.639 [2024-07-15 17:04:56.256192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:3152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.639 [2024-07-15 17:04:56.256208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.639 [2024-07-15 17:04:56.256261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:3160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.639 [2024-07-15 17:04:56.256281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.639 [2024-07-15 17:04:56.256298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:3168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.639 [2024-07-15 17:04:56.256313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.639 [2024-07-15 17:04:56.256329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:3176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.639 [2024-07-15 17:04:56.256369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.639 [2024-07-15 17:04:56.256389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:3184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.639 [2024-07-15 17:04:56.256405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.639 [2024-07-15 17:04:56.256422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:3192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.639 [2024-07-15 17:04:56.256437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.639 [2024-07-15 17:04:56.256455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:3200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.639 [2024-07-15 17:04:56.256471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.639 [2024-07-15 17:04:56.256487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:3208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.639 [2024-07-15 17:04:56.256502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.639 [2024-07-15 17:04:56.256519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:3216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.639 [2024-07-15 17:04:56.256534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.639 [2024-07-15 17:04:56.256551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:3224 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:16:12.639 [2024-07-15 17:04:56.256566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.639 [2024-07-15 17:04:56.256582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:3232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.639 [2024-07-15 17:04:56.256597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.639 [2024-07-15 17:04:56.256614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:3240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:12.639 [2024-07-15 17:04:56.256629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.639 [2024-07-15 17:04:56.256646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.639 [2024-07-15 17:04:56.256661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.639 [2024-07-15 17:04:56.256678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:2680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.639 [2024-07-15 17:04:56.256693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.639 [2024-07-15 17:04:56.256726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:2688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.640 [2024-07-15 17:04:56.256743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.640 [2024-07-15 17:04:56.256761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:2696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.640 [2024-07-15 17:04:56.256776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.640 [2024-07-15 17:04:56.256792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:2704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.640 [2024-07-15 17:04:56.256807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.640 [2024-07-15 17:04:56.256825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:2712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.640 [2024-07-15 17:04:56.256840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.640 [2024-07-15 17:04:56.256857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:2720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.640 [2024-07-15 17:04:56.256872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.640 [2024-07-15 17:04:56.256888] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a4dd0 is same with the state(5) to be set 00:16:12.640 [2024-07-15 17:04:56.256908] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:12.640 [2024-07-15 17:04:56.256919] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:12.640 [2024-07-15 17:04:56.256931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2728 len:8 PRP1 0x0 PRP2 0x0 00:16:12.640 [2024-07-15 17:04:56.256946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.640 [2024-07-15 17:04:56.256962] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:12.640 [2024-07-15 17:04:56.256974] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:12.640 [2024-07-15 17:04:56.256986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3248 len:8 PRP1 0x0 PRP2 0x0 00:16:12.640 [2024-07-15 17:04:56.257001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.640 [2024-07-15 17:04:56.257016] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:12.640 [2024-07-15 17:04:56.257027] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:12.640 [2024-07-15 17:04:56.257038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3256 len:8 PRP1 0x0 PRP2 0x0 00:16:12.640 [2024-07-15 17:04:56.257053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.640 [2024-07-15 17:04:56.257068] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:12.640 [2024-07-15 17:04:56.257079] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:12.640 [2024-07-15 17:04:56.257090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3264 len:8 PRP1 0x0 PRP2 0x0 00:16:12.640 [2024-07-15 17:04:56.257104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.640 [2024-07-15 17:04:56.257119] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:12.640 [2024-07-15 17:04:56.257138] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:12.640 [2024-07-15 17:04:56.257150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3272 len:8 PRP1 0x0 PRP2 0x0 00:16:12.640 [2024-07-15 17:04:56.257164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.640 [2024-07-15 17:04:56.257180] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:12.640 [2024-07-15 17:04:56.257192] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:12.640 [2024-07-15 17:04:56.257203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3280 len:8 PRP1 0x0 PRP2 0x0 00:16:12.640 [2024-07-15 17:04:56.257218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.640 [2024-07-15 17:04:56.257233] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 
00:16:12.640 [2024-07-15 17:04:56.257244] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:12.640 [2024-07-15 17:04:56.257256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3288 len:8 PRP1 0x0 PRP2 0x0 00:16:12.640 [2024-07-15 17:04:56.257271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.640 [2024-07-15 17:04:56.257286] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:12.640 [2024-07-15 17:04:56.257297] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:12.640 [2024-07-15 17:04:56.257309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3296 len:8 PRP1 0x0 PRP2 0x0 00:16:12.640 [2024-07-15 17:04:56.257324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.640 [2024-07-15 17:04:56.257339] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:12.640 [2024-07-15 17:04:56.257349] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:12.640 [2024-07-15 17:04:56.257375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3304 len:8 PRP1 0x0 PRP2 0x0 00:16:12.640 [2024-07-15 17:04:56.257400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.640 [2024-07-15 17:04:56.257416] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:12.640 [2024-07-15 17:04:56.257427] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:12.640 [2024-07-15 17:04:56.257439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2736 len:8 PRP1 0x0 PRP2 0x0 00:16:12.640 [2024-07-15 17:04:56.257453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.640 [2024-07-15 17:04:56.257468] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:12.640 [2024-07-15 17:04:56.257479] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:12.640 [2024-07-15 17:04:56.257491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2744 len:8 PRP1 0x0 PRP2 0x0 00:16:12.640 [2024-07-15 17:04:56.257505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.640 [2024-07-15 17:04:56.257520] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:12.640 [2024-07-15 17:04:56.257531] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:12.640 [2024-07-15 17:04:56.257542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2752 len:8 PRP1 0x0 PRP2 0x0 00:16:12.640 [2024-07-15 17:04:56.257557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.640 [2024-07-15 17:04:56.257580] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:12.640 [2024-07-15 17:04:56.257591] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:12.640 [2024-07-15 17:04:56.257602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2760 len:8 PRP1 0x0 PRP2 0x0 00:16:12.640 [2024-07-15 17:04:56.257617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.640 [2024-07-15 17:04:56.257632] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:12.640 [2024-07-15 17:04:56.257643] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:12.640 [2024-07-15 17:04:56.257654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2768 len:8 PRP1 0x0 PRP2 0x0 00:16:12.640 [2024-07-15 17:04:56.257668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.640 [2024-07-15 17:04:56.257684] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:12.640 [2024-07-15 17:04:56.257704] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:12.640 [2024-07-15 17:04:56.257716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2776 len:8 PRP1 0x0 PRP2 0x0 00:16:12.640 [2024-07-15 17:04:56.257730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.640 [2024-07-15 17:04:56.257745] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:12.640 [2024-07-15 17:04:56.257756] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:12.640 [2024-07-15 17:04:56.257768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2784 len:8 PRP1 0x0 PRP2 0x0 00:16:12.640 [2024-07-15 17:04:56.257792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.640 [2024-07-15 17:04:56.257807] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:12.640 [2024-07-15 17:04:56.257817] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:12.640 [2024-07-15 17:04:56.257828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2792 len:8 PRP1 0x0 PRP2 0x0 00:16:12.640 [2024-07-15 17:04:56.257843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.640 [2024-07-15 17:04:56.257902] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6a4dd0 was disconnected and freed. reset controller. 
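The long run of ABORTED - SQ DELETION (00/08) completions above is the queued READ and WRITE commands being drained from the TCP qpair that is being torn down; the final notice confirms the qpair was disconnected and freed and that the controller reset (the failover itself) is about to start. As a rough post-mortem check, the amount of in-flight I/O caught by the path drop could be tallied from a saved copy of this output; the log file name below is only a placeholder, not something the test itself produces:

  # hypothetical tally of commands aborted during qpair teardown
  aborted=$(grep -c 'ABORTED - SQ DELETION' bdevperf_run.log)
  echo "aborted in-flight commands: ${aborted}"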
00:16:12.640 [2024-07-15 17:04:56.257928] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:16:12.640 [2024-07-15 17:04:56.257986] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:12.640 [2024-07-15 17:04:56.258008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.640 [2024-07-15 17:04:56.258025] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:12.640 [2024-07-15 17:04:56.258040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.640 [2024-07-15 17:04:56.258055] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:12.640 [2024-07-15 17:04:56.258070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.640 [2024-07-15 17:04:56.258085] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:12.640 [2024-07-15 17:04:56.258110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.640 [2024-07-15 17:04:56.258126] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:12.640 [2024-07-15 17:04:56.258176] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x623570 (9): Bad file descriptor 00:16:12.640 [2024-07-15 17:04:56.262005] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:12.640 [2024-07-15 17:04:56.299648] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:16:12.640 00:16:12.640 Latency(us) 00:16:12.640 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:12.640 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:12.640 Verification LBA range: start 0x0 length 0x4000 00:16:12.640 NVMe0n1 : 15.01 8679.17 33.90 235.93 0.00 14324.67 670.25 16562.73 00:16:12.640 =================================================================================================================== 00:16:12.641 Total : 8679.17 33.90 235.93 0.00 14324.67 670.25 16562.73 00:16:12.641 Received shutdown signal, test time was about 15.000000 seconds 00:16:12.641 00:16:12.641 Latency(us) 00:16:12.641 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:12.641 =================================================================================================================== 00:16:12.641 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:12.641 17:05:02 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:16:12.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
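The shell trace interleaved above shows failover.sh verifying the first phase of the test: it counts the 'Resetting controller successful' notices in the captured bdevperf output and expects exactly three, one for each failover it triggered. A minimal reconstruction of that check, assuming the output was captured to the same try.txt file referenced later in this log:

  # sketch of the verification at host/failover.sh@65 and @67
  count=$(grep -c 'Resetting controller successful' /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt)
  if (( count != 3 )); then
      echo "expected 3 successful controller resets, got ${count}" >&2
      exit 1
  fi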
00:16:12.641 17:05:02 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 00:16:12.641 17:05:02 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:16:12.641 17:05:02 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=75919 00:16:12.641 17:05:02 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 75919 /var/tmp/bdevperf.sock 00:16:12.641 17:05:02 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:16:12.641 17:05:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 75919 ']' 00:16:12.641 17:05:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:12.641 17:05:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:12.641 17:05:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:12.641 17:05:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:12.641 17:05:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:16:13.207 17:05:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:13.207 17:05:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:16:13.207 17:05:03 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:13.464 [2024-07-15 17:05:03.559951] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:13.464 17:05:03 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:16:13.722 [2024-07-15 17:05:03.844251] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:16:13.722 17:05:03 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:13.980 NVMe0n1 00:16:13.980 17:05:04 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:14.546 00:16:14.546 17:05:04 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:14.804 00:16:14.804 17:05:04 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:16:14.804 17:05:04 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:15.063 17:05:05 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:15.369 17:05:05 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:16:18.649 17:05:08 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:18.649 17:05:08 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:16:18.649 17:05:08 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=75996 00:16:18.649 17:05:08 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:18.649 17:05:08 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 75996 00:16:19.607 0 00:16:19.607 17:05:09 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:19.607 [2024-07-15 17:05:02.306635] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:16:19.607 [2024-07-15 17:05:02.306733] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75919 ] 00:16:19.607 [2024-07-15 17:05:02.447936] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:19.607 [2024-07-15 17:05:02.579047] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:19.607 [2024-07-15 17:05:02.634864] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:19.607 [2024-07-15 17:05:05.400602] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:16:19.607 [2024-07-15 17:05:05.400710] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:19.607 [2024-07-15 17:05:05.400735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:19.607 [2024-07-15 17:05:05.400753] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:19.607 [2024-07-15 17:05:05.400767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:19.607 [2024-07-15 17:05:05.400790] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:19.607 [2024-07-15 17:05:05.400807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:19.607 [2024-07-15 17:05:05.400822] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:19.607 [2024-07-15 17:05:05.400835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:19.607 [2024-07-15 17:05:05.400849] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:19.607 [2024-07-15 17:05:05.400897] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:19.607 [2024-07-15 17:05:05.400929] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf54570 (9): Bad file descriptor 00:16:19.607 [2024-07-15 17:05:05.405374] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
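For the second phase the trace above relaunches bdevperf in wait mode (-z) on /var/tmp/bdevperf.sock, adds listeners for the subsystem on ports 4421 and 4422, attaches the NVMe0 controller through each of the three ports, and only then kicks off the workload with bdevperf.py. A condensed sketch of that sequence, with paths shortened to be relative to the SPDK checkout and with the address and NQN as they appear in the trace; the real script interleaves bdev_nvme_get_controllers checks and detaches that are omitted here:

  # target side: publish two additional listeners for the subsystem
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422

  # bdevperf side: attach the same subsystem through all three ports, then run I/O
  for port in 4420 4421 4422; do
      scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
          -b NVMe0 -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  done
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests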
00:16:19.607 Running I/O for 1 seconds... 00:16:19.607 00:16:19.607 Latency(us) 00:16:19.607 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:19.607 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:19.607 Verification LBA range: start 0x0 length 0x4000 00:16:19.607 NVMe0n1 : 1.01 7884.12 30.80 0.00 0.00 16128.99 1690.53 16205.27 00:16:19.607 =================================================================================================================== 00:16:19.607 Total : 7884.12 30.80 0.00 0.00 16128.99 1690.53 16205.27 00:16:19.607 17:05:09 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:19.607 17:05:09 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:16:19.864 17:05:10 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:20.121 17:05:10 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:20.121 17:05:10 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:16:20.378 17:05:10 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:20.941 17:05:10 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:16:24.244 17:05:13 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:16:24.244 17:05:13 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:24.244 17:05:14 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 75919 00:16:24.244 17:05:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 75919 ']' 00:16:24.244 17:05:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 75919 00:16:24.244 17:05:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:16:24.244 17:05:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:24.244 17:05:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75919 00:16:24.244 17:05:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:24.244 17:05:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:24.244 killing process with pid 75919 00:16:24.244 17:05:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75919' 00:16:24.244 17:05:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 75919 00:16:24.244 17:05:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 75919 00:16:24.244 17:05:14 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:16:24.244 17:05:14 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:24.502 17:05:14 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:16:24.502 17:05:14 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:24.502 17:05:14 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:16:24.502 17:05:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:24.502 17:05:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:16:24.502 17:05:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:24.502 17:05:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:16:24.502 17:05:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:24.502 17:05:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:24.761 rmmod nvme_tcp 00:16:24.761 rmmod nvme_fabrics 00:16:24.761 rmmod nvme_keyring 00:16:24.761 17:05:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:24.761 17:05:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:16:24.761 17:05:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:16:24.761 17:05:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 75658 ']' 00:16:24.761 17:05:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 75658 00:16:24.761 17:05:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 75658 ']' 00:16:24.761 17:05:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 75658 00:16:24.761 17:05:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:16:24.761 17:05:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:24.761 17:05:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75658 00:16:24.761 17:05:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:24.761 17:05:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:24.761 killing process with pid 75658 00:16:24.761 17:05:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75658' 00:16:24.761 17:05:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 75658 00:16:24.761 17:05:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 75658 00:16:25.020 17:05:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:25.020 17:05:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:25.020 17:05:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:25.020 17:05:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:25.020 17:05:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:25.020 17:05:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:25.020 17:05:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:25.020 17:05:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:25.020 17:05:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:25.020 ************************************ 00:16:25.020 END TEST nvmf_failover 00:16:25.020 ************************************ 00:16:25.020 00:16:25.020 real 0m33.585s 00:16:25.020 user 2m10.434s 00:16:25.020 sys 0m5.672s 00:16:25.020 17:05:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:25.020 17:05:15 nvmf_tcp.nvmf_failover -- 
common/autotest_common.sh@10 -- # set +x 00:16:25.020 17:05:15 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:25.020 17:05:15 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:16:25.020 17:05:15 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:25.020 17:05:15 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:25.020 17:05:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:25.020 ************************************ 00:16:25.020 START TEST nvmf_host_discovery 00:16:25.020 ************************************ 00:16:25.020 17:05:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:16:25.020 * Looking for test storage... 00:16:25.020 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:25.020 17:05:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:25.020 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:16:25.020 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:25.020 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:25.020 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:25.020 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:25.020 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:25.020 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:25.020 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:25.020 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:25.020 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:25.020 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:25.020 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da 00:16:25.020 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=0b4e8503-7bac-4879-926a-209303c4b3da 00:16:25.020 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:25.020 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:25.020 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:25.020 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:25.020 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:25.020 17:05:15 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:25.020 17:05:15 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:25.020 17:05:15 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:25.021 17:05:15 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.021 17:05:15 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.021 17:05:15 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.021 17:05:15 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:16:25.021 17:05:15 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.021 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:16:25.021 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:25.021 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:25.021 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:25.021 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:25.021 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:25.021 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:25.021 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:25.021 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:25.021 17:05:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:16:25.021 17:05:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:16:25.021 17:05:15 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:16:25.021 17:05:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:16:25.021 17:05:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:16:25.021 17:05:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:16:25.021 17:05:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:16:25.021 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:25.021 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:25.021 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:25.021 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:25.021 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:25.021 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:25.021 17:05:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:25.021 17:05:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:25.021 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:25.021 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:25.021 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:25.021 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:25.021 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:25.021 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:25.021 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:25.021 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:25.021 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:25.021 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:25.021 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:25.021 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:25.021 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:25.021 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:25.021 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:25.021 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:25.021 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:25.021 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:25.021 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:25.293 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:25.293 Cannot find device "nvmf_tgt_br" 00:16:25.293 
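The nvmf_host_discovery run now prepares its own virtual network. The 'Cannot find device' and 'Cannot open network namespace' notices around here are just the cleanup pass in nvmf/common.sh tolerating a machine with no leftovers from a previous run; the steps that follow build the topology that nvmf_veth_init describes: one host-side veth pair, two target-side veth pairs moved into the nvmf_tgt_ns_spdk namespace, and addresses 10.0.0.1 on the initiator interface and 10.0.0.2/10.0.0.3 on the target interfaces. A trimmed sketch of that construction, using the interface and namespace names from this log:

  # namespace plus the three veth pairs (prior-run cleanup omitted)
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up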
17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@155 -- # true 00:16:25.293 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:25.293 Cannot find device "nvmf_tgt_br2" 00:16:25.293 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@156 -- # true 00:16:25.293 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:25.293 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:25.293 Cannot find device "nvmf_tgt_br" 00:16:25.293 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@158 -- # true 00:16:25.293 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:25.293 Cannot find device "nvmf_tgt_br2" 00:16:25.293 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@159 -- # true 00:16:25.293 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:25.293 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:25.293 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:25.293 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:25.293 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:16:25.293 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:25.293 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:25.293 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:16:25.293 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:25.293 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:25.293 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:25.293 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:25.293 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:25.293 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:25.293 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:25.293 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:25.293 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:25.293 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:25.294 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:25.294 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:25.294 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:25.294 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:25.294 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@188 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:25.294 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:25.294 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:25.294 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:25.294 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:25.553 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:25.553 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:25.553 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:25.553 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:25.553 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:25.553 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:25.553 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.050 ms 00:16:25.553 00:16:25.553 --- 10.0.0.2 ping statistics --- 00:16:25.553 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:25.553 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:16:25.553 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:25.553 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:25.553 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.035 ms 00:16:25.553 00:16:25.553 --- 10.0.0.3 ping statistics --- 00:16:25.553 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:25.553 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:16:25.553 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:25.553 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:25.553 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:16:25.553 00:16:25.553 --- 10.0.0.1 ping statistics --- 00:16:25.553 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:25.553 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:16:25.553 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:25.553 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@433 -- # return 0 00:16:25.553 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:25.553 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:25.553 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:25.553 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:25.553 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:25.553 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:25.553 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:25.553 17:05:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:16:25.553 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:25.553 17:05:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:25.553 17:05:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:25.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:25.553 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=76269 00:16:25.553 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 76269 00:16:25.553 17:05:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:25.553 17:05:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 76269 ']' 00:16:25.553 17:05:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:25.553 17:05:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:25.553 17:05:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:25.553 17:05:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:25.553 17:05:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:25.553 [2024-07-15 17:05:15.711462] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:16:25.553 [2024-07-15 17:05:15.711570] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:25.811 [2024-07-15 17:05:15.852601] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:25.811 [2024-07-15 17:05:15.957743] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
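Put together, the nvmf_veth_init commands traced above build this topology: one veth pair per role, with the target ends (nvmf_tgt_if, nvmf_tgt_if2) moved into nvmf_tgt_ns_spdk, the peer ends plus nvmf_init_br enslaved to the nvmf_br bridge, an INPUT rule accepting TCP port 4420 on nvmf_init_if plus a FORWARD rule for bridge traffic, and three pings to prove reachability in both directions. A condensed replay of those commands (root required; names and addresses verbatim from the trace):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1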
00:16:25.811 [2024-07-15 17:05:15.957793] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:25.811 [2024-07-15 17:05:15.957804] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:25.811 [2024-07-15 17:05:15.957812] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:25.811 [2024-07-15 17:05:15.957819] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:25.811 [2024-07-15 17:05:15.957849] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:25.811 [2024-07-15 17:05:16.009980] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:26.377 17:05:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:26.377 17:05:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:16:26.377 17:05:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:26.377 17:05:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:26.377 17:05:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:26.635 17:05:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:26.636 17:05:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:26.636 17:05:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.636 17:05:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:26.636 [2024-07-15 17:05:16.702378] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:26.636 17:05:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.636 17:05:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:16:26.636 17:05:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.636 17:05:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:26.636 [2024-07-15 17:05:16.710493] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:16:26.636 17:05:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.636 17:05:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:16:26.636 17:05:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.636 17:05:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:26.636 null0 00:16:26.636 17:05:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.636 17:05:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:16:26.636 17:05:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.636 17:05:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:26.636 null1 00:16:26.636 17:05:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.636 17:05:16 nvmf_tcp.nvmf_host_discovery -- 
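With the target running inside the namespace, the test provisions it over the default /var/tmp/spdk.sock socket: a TCP transport (with the "-o -u 8192" options carried in NVMF_TRANSPORT_OPTS), the well-known discovery subsystem listening on 10.0.0.2:8009, and two null bdevs (1000 MB, 512-byte blocks) that later back the namespaces. rpc_cmd in the suite wraps SPDK's scripts/rpc.py, so an equivalent hand-run sequence would look roughly like this (the rpc.py path is assumed; subcommand names and arguments are exactly those in the trace):

    rpc=scripts/rpc.py                              # path assumed
    $rpc nvmf_create_transport -t tcp -o -u 8192    # options exactly as rpc_cmd passed them above
    $rpc nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
         -t tcp -a 10.0.0.2 -s 8009                 # discovery service on the first target IP
    $rpc bdev_null_create null0 1000 512            # 1000 MB null bdev, 512-byte blocks
    $rpc bdev_null_create null1 1000 512
    $rpc bdev_wait_for_examine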
host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:16:26.636 17:05:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.636 17:05:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:26.636 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:16:26.636 17:05:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.636 17:05:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=76301 00:16:26.636 17:05:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:16:26.636 17:05:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 76301 /tmp/host.sock 00:16:26.636 17:05:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 76301 ']' 00:16:26.636 17:05:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:16:26.636 17:05:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:26.636 17:05:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:16:26.636 17:05:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:26.636 17:05:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:26.636 [2024-07-15 17:05:16.797497] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:16:26.636 [2024-07-15 17:05:16.797798] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76301 ] 00:16:26.636 [2024-07-15 17:05:16.932031] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:26.894 [2024-07-15 17:05:17.076489] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:26.894 [2024-07-15 17:05:17.129601] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:27.461 17:05:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:27.461 17:05:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:16:27.461 17:05:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:27.461 17:05:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:16:27.461 17:05:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.461 17:05:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:27.461 17:05:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.461 17:05:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:16:27.461 17:05:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.461 17:05:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:27.461 17:05:17 
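The "host" in this test is a second SPDK app: another nvmf_tgt instance pinned to core 0 with its RPC server moved to /tmp/host.sock so it does not collide with the target's default socket. Once it is up, bdev_nvme_start_discovery connects its bdev_nvme module to the discovery service at 10.0.0.2:8009, and every subsystem reported in the discovery log page is attached automatically (controller nvme0, bdevs nvme0n1, nvme0n2, ...). A condensed replay of the traced commands (binary path as in the trace; rpc.py assumed behind rpc_cmd):

    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &
    hostpid=$!
    # the suite's waitforlisten polls until the RPC socket answers before continuing
    scripts/rpc.py -s /tmp/host.sock log_set_flag bdev_nvme
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test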
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.461 17:05:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:16:27.461 17:05:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:16:27.461 17:05:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:27.461 17:05:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:27.461 17:05:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:27.461 17:05:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.461 17:05:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:27.461 17:05:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:27.719 17:05:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.719 17:05:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:16:27.719 17:05:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:16:27.719 17:05:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:27.719 17:05:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:27.719 17:05:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:27.719 17:05:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.719 17:05:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:27.719 17:05:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:27.719 17:05:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.719 17:05:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:16:27.719 17:05:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:16:27.719 17:05:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.719 17:05:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:27.719 17:05:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.719 17:05:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:16:27.719 17:05:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:27.719 17:05:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.719 17:05:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:27.719 17:05:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:27.719 17:05:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:27.719 17:05:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:27.719 17:05:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.719 17:05:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:16:27.719 17:05:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:16:27.719 17:05:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:27.719 17:05:17 nvmf_tcp.nvmf_host_discovery -- 
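The checks that follow all go through two small helpers visible in the trace: get_subsystem_names lists the controllers the host has attached and get_bdev_list the bdevs they expose, each reduced to a sorted, space-separated string so it can be compared against literals like "nvme0" or "nvme0n1 nvme0n2". A sketch matching the traced pipelines (rpc.py assumed behind rpc_cmd):

    get_subsystem_names() {
        scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers \
            | jq -r '.[].name' | sort | xargs
    }
    get_bdev_list() {
        scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs \
            | jq -r '.[].name' | sort | xargs
    }
    # e.g. [[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]] once both namespaces are visible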
host/discovery.sh@55 -- # sort 00:16:27.719 17:05:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:27.719 17:05:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.719 17:05:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:27.719 17:05:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:27.720 17:05:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.720 17:05:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:16:27.720 17:05:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:16:27.720 17:05:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.720 17:05:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:27.720 17:05:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.720 17:05:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:16:27.720 17:05:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:27.720 17:05:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:27.720 17:05:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.720 17:05:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:27.720 17:05:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:27.720 17:05:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:27.720 17:05:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.978 17:05:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:16:27.978 17:05:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:16:27.978 17:05:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:27.978 17:05:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:27.978 17:05:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:27.978 17:05:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:27.978 17:05:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.978 17:05:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:27.978 17:05:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.978 17:05:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:16:27.978 17:05:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:27.978 17:05:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.978 17:05:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:27.978 [2024-07-15 17:05:18.082855] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:27.978 17:05:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.978 17:05:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:16:27.978 
17:05:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:27.978 17:05:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.978 17:05:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:27.978 17:05:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:27.978 17:05:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:27.978 17:05:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:27.978 17:05:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.978 17:05:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:16:27.978 17:05:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:16:27.978 17:05:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:27.978 17:05:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.979 17:05:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:27.979 17:05:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:27.979 17:05:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:27.979 17:05:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:27.979 17:05:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.979 17:05:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:16:27.979 17:05:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:16:27.979 17:05:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:16:27.979 17:05:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:27.979 17:05:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:27.979 17:05:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:27.979 17:05:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:27.979 17:05:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:27.979 17:05:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:16:27.979 17:05:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:16:27.979 17:05:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:27.979 17:05:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.979 17:05:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:27.979 17:05:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.979 17:05:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:16:27.979 17:05:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:16:27.979 17:05:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:16:27.979 17:05:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:27.979 17:05:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:16:27.979 17:05:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.979 17:05:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:27.979 17:05:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.979 17:05:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:27.979 17:05:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:27.979 17:05:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:27.979 17:05:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:27.979 17:05:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:16:27.979 17:05:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:16:27.979 17:05:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:27.979 17:05:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.979 17:05:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:27.979 17:05:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:27.979 17:05:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:27.979 17:05:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:27.979 17:05:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.237 17:05:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:16:28.237 17:05:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:16:28.497 [2024-07-15 17:05:18.755531] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:16:28.497 [2024-07-15 17:05:18.755585] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:16:28.497 [2024-07-15 17:05:18.755605] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:28.497 [2024-07-15 17:05:18.761581] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:16:28.756 [2024-07-15 17:05:18.819142] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: 
Discovery[10.0.0.2:8009] attach nvme0 done 00:16:28.756 [2024-07-15 17:05:18.819187] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:16:29.336 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:29.336 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:16:29.336 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:16:29.336 17:05:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:29.336 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.336 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:29.336 17:05:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:29.336 17:05:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:29.336 17:05:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:29.336 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.336 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:29.336 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:29.337 17:05:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:16:29.337 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:16:29.337 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:29.337 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:29.337 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:16:29.337 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:16:29.337 17:05:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:29.337 17:05:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:29.337 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.337 17:05:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:29.337 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:29.337 17:05:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:29.337 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.337 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:16:29.337 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:29.337 17:05:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:16:29.337 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:16:29.337 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:29.337 17:05:19 
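Because the discovery attach happens asynchronously (the bdev_nvme INFO lines above land roughly a second after the triggering RPC), every assertion is wrapped in waitforcondition, whose traced body is a bounded retry loop: evaluate the condition, and if it fails sleep one second and try again, up to ten times. A minimal sketch consistent with the common/autotest_common.sh lines in the trace (the failure branch after the loop is not shown above and is assumed):

    waitforcondition() {
        local cond=$1
        local max=10
        while (( max-- )); do
            if eval "$cond"; then
                return 0
            fi
            sleep 1
        done
        return 1        # assumed: give up after ten attempts
    }
    # usage, as in the trace:
    waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'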
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:29.337 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:16:29.337 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:16:29.337 17:05:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:16:29.337 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.337 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:29.337 17:05:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:16:29.337 17:05:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:16:29.337 17:05:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:16:29.337 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.337 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:16:29.337 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:29.337 17:05:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:16:29.337 17:05:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:16:29.337 17:05:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:29.337 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:29.337 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:29.337 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:29.337 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:29.337 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:16:29.337 17:05:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:16:29.337 17:05:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:29.337 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.337 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:29.337 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.337 17:05:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:16:29.337 17:05:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:16:29.337 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:16:29.337 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:29.337 17:05:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:16:29.337 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.337 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:29.337 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.337 17:05:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:29.337 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:29.337 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:29.337 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:29.337 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:16:29.337 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:16:29.337 17:05:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:29.337 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.337 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:29.337 17:05:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:29.337 17:05:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:29.337 17:05:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:29.337 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.337 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:29.337 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:29.337 17:05:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:16:29.337 17:05:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:16:29.337 17:05:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:29.337 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:29.338 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:29.338 17:05:19 nvmf_tcp.nvmf_host_discovery 
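Namespace hot-add and listener events are counted through the host app's notification bus: notify_get_notifications -i <notify_id> returns the notifications issued after that id, the count comes from jq, and notify_id is advanced so the next check only sees new events. That bookkeeping matches the notify_id values printed in the trace; the exact body of get_notification_count in discovery.sh may differ slightly from this sketch:

    notify_id=0
    get_notification_count() {
        notification_count=$(scripts/rpc.py -s /tmp/host.sock \
            notify_get_notifications -i "$notify_id" | jq '. | length')
        notify_id=$(( notify_id + notification_count ))   # consistent with the 0 -> 1 -> 2 progression above
    }
    # assert exactly one new event since the previous check:
    get_notification_count && (( notification_count == 1 ))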
-- common/autotest_common.sh@914 -- # (( max-- )) 00:16:29.338 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:29.338 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:16:29.338 17:05:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:16:29.338 17:05:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:16:29.338 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.338 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:29.338 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.597 17:05:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:16:29.597 17:05:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:16:29.597 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:16:29.597 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:29.597 17:05:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:16:29.597 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.597 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:29.597 [2024-07-15 17:05:19.648608] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:29.597 [2024-07-15 17:05:19.649406] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:16:29.597 [2024-07-15 17:05:19.649448] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:29.597 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.597 17:05:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:29.597 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:29.597 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:29.597 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:29.597 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:16:29.597 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:16:29.597 [2024-07-15 17:05:19.655392] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:16:29.597 17:05:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:29.597 17:05:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:29.597 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.597 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:29.597 17:05:19 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:29.597 17:05:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:29.597 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.597 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:29.597 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:29.597 17:05:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:29.597 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:29.597 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:29.597 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:29.597 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:16:29.597 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:16:29.597 17:05:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:29.597 17:05:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:29.597 17:05:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:29.597 17:05:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:29.597 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.597 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:29.597 [2024-07-15 17:05:19.718702] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:16:29.597 [2024-07-15 17:05:19.718734] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:16:29.597 [2024-07-15 17:05:19.718743] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:16:29.597 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.597 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:29.597 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:29.597 17:05:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:16:29.597 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:16:29.597 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:29.597 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:29.597 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:16:29.597 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:16:29.597 17:05:19 nvmf_tcp.nvmf_host_discovery -- 
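Listener changes on the target show up on the host as extra paths of the same controller, so the test inspects the trsvcid of every connected path rather than the controller list: after the 4421 listener is added the expectation is "4420 4421", and after 4420 is removed it shrinks to "4421" alone. The traced pipeline, wrapped as a helper for readability (rpc.py assumed behind rpc_cmd):

    get_subsystem_paths() {    # $1 = controller name, e.g. nvme0
        scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
            | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }
    [[ "$(get_subsystem_paths nvme0)" == "4420 4421" ]]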
host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:16:29.597 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.597 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:29.597 17:05:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:16:29.597 17:05:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:16:29.597 17:05:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:16:29.597 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.597 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:16:29.597 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:29.597 17:05:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:16:29.597 17:05:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:16:29.597 17:05:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:29.597 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:29.597 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:29.597 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:29.597 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:29.597 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:16:29.597 17:05:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:29.597 17:05:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:16:29.597 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.597 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:29.597 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.597 17:05:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:16:29.597 17:05:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:16:29.597 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:16:29.597 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:29.597 17:05:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:29.597 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.597 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:29.597 [2024-07-15 17:05:19.865693] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:16:29.597 [2024-07-15 17:05:19.865734] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:29.597 [2024-07-15 17:05:19.866043] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:29.597 [2024-07-15 17:05:19.866084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:29.597 [2024-07-15 17:05:19.866098] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:29.597 [2024-07-15 17:05:19.866108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:29.597 [2024-07-15 17:05:19.866118] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:29.597 [2024-07-15 17:05:19.866127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:29.597 [2024-07-15 17:05:19.866137] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:29.597 [2024-07-15 17:05:19.866146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:29.597 [2024-07-15 17:05:19.866155] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1937600 is same with the state(5) to be set 00:16:29.597 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.597 17:05:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:29.597 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:29.597 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 
max=10 00:16:29.597 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:29.597 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:16:29.597 [2024-07-15 17:05:19.871685] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:16:29.597 [2024-07-15 17:05:19.871723] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:16:29.597 [2024-07-15 17:05:19.871801] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1937600 (9): Bad file descriptor 00:16:29.597 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:16:29.597 17:05:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:29.597 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.597 17:05:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:29.597 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:29.597 17:05:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:29.597 17:05:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:29.597 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.855 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:29.855 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:29.855 17:05:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:29.855 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:29.855 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:29.855 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:29.855 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:16:29.855 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:16:29.855 17:05:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:29.855 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.855 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:29.855 17:05:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:29.855 17:05:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:29.855 17:05:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:29.855 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.855 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:29.855 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:29.855 17:05:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- 
# waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:16:29.855 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:16:29.855 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:29.855 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:29.855 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:16:29.855 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:16:29.855 17:05:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:16:29.855 17:05:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:16:29.855 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.855 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:29.855 17:05:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:16:29.855 17:05:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:16:29.855 17:05:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.855 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:16:29.855 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:29.855 17:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:16:29.855 17:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:16:29.855 17:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:29.855 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:29.855 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:29.855 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:29.855 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:29.855 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:16:29.855 17:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:29.855 17:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:16:29.855 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.855 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:29.855 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.855 17:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:16:29.855 17:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:16:29.855 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:16:29.855 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:29.855 17:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:16:29.855 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.855 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:29.855 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.855 17:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:16:29.855 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:16:29.855 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:29.855 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:29.855 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:16:29.855 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:16:29.855 17:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:29.855 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.855 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:29.855 17:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:29.855 17:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:29.855 17:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:29.855 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.113 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:16:30.113 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:30.113 17:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:16:30.113 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:16:30.113 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:30.113 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:30.113 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:16:30.113 
17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:16:30.113 17:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:30.113 17:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:30.113 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.113 17:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:30.113 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:30.113 17:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:30.113 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.113 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:16:30.113 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:30.113 17:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:16:30.113 17:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:16:30.113 17:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:30.113 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:30.113 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:30.113 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:30.113 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:30.113 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:16:30.113 17:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:16:30.113 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.113 17:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:30.113 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:30.113 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.113 17:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:16:30.113 17:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:16:30.113 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:16:30.113 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:30.113 17:05:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:30.113 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.113 17:05:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:31.045 [2024-07-15 17:05:21.276204] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:16:31.045 [2024-07-15 17:05:21.276247] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:16:31.045 [2024-07-15 17:05:21.276268] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:31.045 [2024-07-15 17:05:21.282244] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:16:31.045 [2024-07-15 17:05:21.342904] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:16:31.045 [2024-07-15 17:05:21.342960] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:16:31.303 17:05:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.303 17:05:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:31.303 17:05:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:16:31.303 17:05:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:31.303 17:05:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:16:31.303 17:05:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:31.303 17:05:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:16:31.303 17:05:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:31.303 17:05:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:31.303 17:05:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.303 17:05:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:31.303 request: 00:16:31.303 { 00:16:31.303 "name": "nvme", 00:16:31.303 "trtype": 
"tcp", 00:16:31.303 "traddr": "10.0.0.2", 00:16:31.303 "adrfam": "ipv4", 00:16:31.303 "trsvcid": "8009", 00:16:31.303 "hostnqn": "nqn.2021-12.io.spdk:test", 00:16:31.303 "wait_for_attach": true, 00:16:31.303 "method": "bdev_nvme_start_discovery", 00:16:31.303 "req_id": 1 00:16:31.303 } 00:16:31.303 Got JSON-RPC error response 00:16:31.303 response: 00:16:31.303 { 00:16:31.303 "code": -17, 00:16:31.303 "message": "File exists" 00:16:31.303 } 00:16:31.303 17:05:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:16:31.303 17:05:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:16:31.303 17:05:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:31.303 17:05:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:31.303 17:05:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:31.303 17:05:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:16:31.303 17:05:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:16:31.303 17:05:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.303 17:05:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:16:31.303 17:05:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:31.303 17:05:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:16:31.303 17:05:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:16:31.303 17:05:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.303 17:05:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:16:31.303 17:05:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:16:31.303 17:05:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:31.303 17:05:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.303 17:05:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:31.303 17:05:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:31.303 17:05:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:31.303 17:05:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:31.303 17:05:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.303 17:05:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:31.303 17:05:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:31.303 17:05:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:16:31.303 17:05:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:31.303 17:05:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:16:31.303 17:05:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t 
"$arg")" in 00:16:31.303 17:05:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:16:31.303 17:05:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:31.303 17:05:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:31.304 17:05:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.304 17:05:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:31.304 request: 00:16:31.304 { 00:16:31.304 "name": "nvme_second", 00:16:31.304 "trtype": "tcp", 00:16:31.304 "traddr": "10.0.0.2", 00:16:31.304 "adrfam": "ipv4", 00:16:31.304 "trsvcid": "8009", 00:16:31.304 "hostnqn": "nqn.2021-12.io.spdk:test", 00:16:31.304 "wait_for_attach": true, 00:16:31.304 "method": "bdev_nvme_start_discovery", 00:16:31.304 "req_id": 1 00:16:31.304 } 00:16:31.304 Got JSON-RPC error response 00:16:31.304 response: 00:16:31.304 { 00:16:31.304 "code": -17, 00:16:31.304 "message": "File exists" 00:16:31.304 } 00:16:31.304 17:05:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:16:31.304 17:05:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:16:31.304 17:05:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:31.304 17:05:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:31.304 17:05:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:31.304 17:05:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:16:31.304 17:05:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:16:31.304 17:05:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.304 17:05:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:16:31.304 17:05:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:31.304 17:05:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:16:31.304 17:05:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:16:31.304 17:05:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.304 17:05:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:16:31.304 17:05:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:16:31.304 17:05:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:31.304 17:05:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:31.304 17:05:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.304 17:05:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:31.304 17:05:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:31.304 17:05:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:31.561 17:05:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.561 17:05:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:31.561 17:05:21 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:16:31.561 17:05:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:16:31.561 17:05:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:16:31.561 17:05:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:16:31.561 17:05:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:31.561 17:05:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:16:31.561 17:05:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:31.562 17:05:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:16:31.562 17:05:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.562 17:05:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:32.518 [2024-07-15 17:05:22.635636] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:32.518 [2024-07-15 17:05:22.635696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1952330 with addr=10.0.0.2, port=8010 00:16:32.518 [2024-07-15 17:05:22.635724] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:16:32.518 [2024-07-15 17:05:22.635736] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:16:32.518 [2024-07-15 17:05:22.635745] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:16:33.453 [2024-07-15 17:05:23.635635] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:33.453 [2024-07-15 17:05:23.635698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1952330 with addr=10.0.0.2, port=8010 00:16:33.453 [2024-07-15 17:05:23.635724] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:16:33.453 [2024-07-15 17:05:23.635735] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:16:33.453 [2024-07-15 17:05:23.635745] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:16:34.386 [2024-07-15 17:05:24.635490] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:16:34.386 request: 00:16:34.386 { 00:16:34.386 "name": "nvme_second", 00:16:34.386 "trtype": "tcp", 00:16:34.386 "traddr": "10.0.0.2", 00:16:34.386 "adrfam": "ipv4", 00:16:34.386 "trsvcid": "8010", 00:16:34.386 "hostnqn": "nqn.2021-12.io.spdk:test", 00:16:34.386 "wait_for_attach": false, 00:16:34.386 "attach_timeout_ms": 3000, 00:16:34.386 "method": "bdev_nvme_start_discovery", 00:16:34.386 "req_id": 1 00:16:34.386 } 00:16:34.386 Got JSON-RPC error response 00:16:34.386 response: 00:16:34.386 { 00:16:34.386 "code": -110, 00:16:34.386 "message": "Connection timed out" 00:16:34.386 } 00:16:34.386 17:05:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 
-- # [[ 1 == 0 ]] 00:16:34.386 17:05:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:16:34.386 17:05:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:34.386 17:05:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:34.386 17:05:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:34.386 17:05:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:16:34.386 17:05:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:16:34.386 17:05:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.386 17:05:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:34.386 17:05:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:16:34.386 17:05:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:16:34.386 17:05:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:16:34.386 17:05:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.705 17:05:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:16:34.705 17:05:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:16:34.705 17:05:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 76301 00:16:34.705 17:05:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:16:34.705 17:05:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:34.705 17:05:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:16:34.705 17:05:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:34.705 17:05:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:16:34.705 17:05:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:34.705 17:05:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:34.705 rmmod nvme_tcp 00:16:34.705 rmmod nvme_fabrics 00:16:34.705 rmmod nvme_keyring 00:16:34.705 17:05:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:34.705 17:05:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:16:34.705 17:05:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:16:34.705 17:05:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 76269 ']' 00:16:34.705 17:05:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 76269 00:16:34.705 17:05:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 76269 ']' 00:16:34.705 17:05:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 76269 00:16:34.705 17:05:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:16:34.705 17:05:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:34.705 17:05:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76269 00:16:34.705 killing process with pid 76269 00:16:34.705 17:05:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:34.705 17:05:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 
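The two JSON-RPC failures traced above exercise the error paths of bdev_nvme_start_discovery on the host socket: re-registering a discovery service that is already running returns -17 "File exists", while pointing a fresh name at port 8010 (where nothing listens) with a 3000 ms attach timeout returns -110 "Connection timed out". A condensed sketch of the equivalent calls using scripts/rpc.py directly; the trace goes through the rpc_cmd helper, which is assumed here to forward the same arguments to rpc.py:

  # duplicate of the already-running discovery service -> -17 "File exists"
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme \
      -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w

  # no listener on 8010, so the 3 s attach timeout expires -> -110 "Connection timed out"
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second \
      -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000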
00:16:34.705 17:05:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76269' 00:16:34.705 17:05:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 76269 00:16:34.705 17:05:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 76269 00:16:34.963 17:05:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:34.963 17:05:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:34.963 17:05:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:34.963 17:05:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:34.963 17:05:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:34.963 17:05:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:34.963 17:05:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:34.963 17:05:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:34.963 17:05:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:34.963 00:16:34.963 real 0m9.903s 00:16:34.963 user 0m19.034s 00:16:34.963 sys 0m1.919s 00:16:34.963 17:05:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:34.963 17:05:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:34.963 ************************************ 00:16:34.963 END TEST nvmf_host_discovery 00:16:34.963 ************************************ 00:16:34.963 17:05:25 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:34.963 17:05:25 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:16:34.963 17:05:25 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:34.963 17:05:25 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:34.963 17:05:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:34.963 ************************************ 00:16:34.963 START TEST nvmf_host_multipath_status 00:16:34.963 ************************************ 00:16:34.963 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:16:34.963 * Looking for test storage... 
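Every waitforcondition assertion in the discovery test that just finished follows the retry pattern visible at autotest_common.sh lines 912-916 of the trace: store the condition string, then re-eval it up to ten times until it holds. A minimal reconstruction from those traced statements; the sleep between attempts is an assumption, since the retry delay itself never appears in xtrace output:

  waitforcondition() {
      local cond=$1
      local max=10
      while (( max-- )); do
          # cond is a shell expression such as '[[ "$(get_bdev_list)" == "" ]]'
          if eval "$cond"; then
              return 0
          fi
          sleep 1   # assumed retry delay; not visible in the trace
      done
      return 1
  }

  get_notification_count() {
      # count notifications newer than the last seen notify_id over the host RPC socket,
      # then advance notify_id (consistent with the notify_id=2 -> 4 progression above)
      notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
      notify_id=$((notify_id + notification_count))
  }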
00:16:34.963 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:34.963 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:34.963 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:16:34.964 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:34.964 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:34.964 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:34.964 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:34.964 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:34.964 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:34.964 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:34.964 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:34.964 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:34.964 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:34.964 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da 00:16:34.964 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=0b4e8503-7bac-4879-926a-209303c4b3da 00:16:34.964 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:34.964 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:34.964 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:34.964 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:34.964 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:34.964 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:34.964 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:34.964 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:34.964 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.964 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.964 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.964 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:16:34.964 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.964 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:16:34.964 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:34.964 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:34.964 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:34.964 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:34.964 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:34.964 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:34.964 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:34.964 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:35.222 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:35.222 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:35.222 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:35.222 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:16:35.222 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:35.222 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 
-- # NQN=nqn.2016-06.io.spdk:cnode1 00:16:35.222 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:16:35.222 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:35.222 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:35.222 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:35.222 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:35.222 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:35.222 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:35.222 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:35.222 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:35.222 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:35.222 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:35.222 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:35.222 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:35.223 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:35.223 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:35.223 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:35.223 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:35.223 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:35.223 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:35.223 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:35.223 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:35.223 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:35.223 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:35.223 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:35.223 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:35.223 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:35.223 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:35.223 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:35.223 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:35.223 Cannot find device "nvmf_tgt_br" 00:16:35.223 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # true 00:16:35.223 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # ip 
link set nvmf_tgt_br2 nomaster 00:16:35.223 Cannot find device "nvmf_tgt_br2" 00:16:35.223 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # true 00:16:35.223 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:35.223 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:35.223 Cannot find device "nvmf_tgt_br" 00:16:35.223 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # true 00:16:35.223 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:35.223 Cannot find device "nvmf_tgt_br2" 00:16:35.223 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # true 00:16:35.223 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:35.223 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:35.223 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:35.223 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:35.223 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:16:35.223 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:35.223 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:35.223 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:16:35.223 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:35.223 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:35.223 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:35.223 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:35.223 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:35.223 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:35.223 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:35.223 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:35.223 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:35.223 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:35.223 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:35.223 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:35.223 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:35.223 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:35.223 17:05:25 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:35.223 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:35.223 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:35.482 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:35.482 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:35.482 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:35.482 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:35.482 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:35.482 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:35.482 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:35.482 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:35.482 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.114 ms 00:16:35.482 00:16:35.482 --- 10.0.0.2 ping statistics --- 00:16:35.482 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:35.482 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:16:35.482 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:35.482 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:16:35.482 00:16:35.482 --- 10.0.0.3 ping statistics --- 00:16:35.482 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:35.482 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:16:35.482 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:35.482 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:35.482 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:35.482 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:16:35.482 00:16:35.482 --- 10.0.0.1 ping statistics --- 00:16:35.482 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:35.482 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:16:35.482 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:35.482 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@433 -- # return 0 00:16:35.482 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:35.483 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:35.483 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:35.483 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:35.483 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:35.483 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:35.483 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:35.483 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:16:35.483 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:35.483 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:35.483 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:35.483 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=76747 00:16:35.483 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 76747 00:16:35.483 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 76747 ']' 00:16:35.483 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:35.483 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:35.483 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:35.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:35.483 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:16:35.483 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:35.483 17:05:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:35.483 [2024-07-15 17:05:25.664910] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
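The ping checks above close out nvmf_veth_init, the fixture the multipath test runs on: a network namespace for the target, veth pairs bridged back to the initiator side, and 10.0.0.1/2/3 assigned as the initiator, first target, and second target addresses. Condensed from the traced commands; the per-interface "ip link set ... up" steps and the bridge FORWARD rule are elided for brevity, and nothing beyond what the trace shows is added:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT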
00:16:35.483 [2024-07-15 17:05:25.665002] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:35.742 [2024-07-15 17:05:25.805173] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:35.742 [2024-07-15 17:05:25.923656] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:35.742 [2024-07-15 17:05:25.923930] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:35.742 [2024-07-15 17:05:25.923951] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:35.742 [2024-07-15 17:05:25.923961] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:35.742 [2024-07-15 17:05:25.923968] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:35.742 [2024-07-15 17:05:25.924120] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:35.742 [2024-07-15 17:05:25.924217] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:35.742 [2024-07-15 17:05:25.977057] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:36.678 17:05:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:36.678 17:05:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:16:36.678 17:05:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:36.678 17:05:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:36.678 17:05:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:36.678 17:05:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:36.678 17:05:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=76747 00:16:36.678 17:05:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:36.678 [2024-07-15 17:05:26.905275] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:36.678 17:05:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:16:36.936 Malloc0 00:16:36.936 17:05:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:16:37.195 17:05:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:37.454 17:05:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:37.713 [2024-07-15 17:05:27.898241] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:37.713 17:05:27 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:37.971 [2024-07-15 17:05:28.130374] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:37.971 17:05:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=76803 00:16:37.971 17:05:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:16:37.971 17:05:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:37.971 17:05:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 76803 /var/tmp/bdevperf.sock 00:16:37.971 17:05:28 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 76803 ']' 00:16:37.971 17:05:28 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:37.971 17:05:28 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:37.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:37.971 17:05:28 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:37.971 17:05:28 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:37.971 17:05:28 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:38.923 17:05:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:38.923 17:05:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:16:38.923 17:05:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:16:39.180 17:05:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:16:39.746 Nvme0n1 00:16:39.746 17:05:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:16:40.002 Nvme0n1 00:16:40.002 17:05:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:16:40.002 17:05:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:16:41.970 17:05:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:16:41.970 17:05:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:16:42.226 17:05:32 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:16:42.484 17:05:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:16:43.418 17:05:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:16:43.418 17:05:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:43.418 17:05:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:43.418 17:05:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:43.675 17:05:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:43.675 17:05:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:43.675 17:05:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:43.675 17:05:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:43.933 17:05:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:43.933 17:05:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:43.933 17:05:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:43.933 17:05:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:44.191 17:05:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:44.192 17:05:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:44.192 17:05:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:44.192 17:05:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:44.450 17:05:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:44.450 17:05:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:44.450 17:05:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:44.450 17:05:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:44.709 17:05:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:44.709 17:05:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # 
port_status 4421 accessible true 00:16:44.709 17:05:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:44.709 17:05:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:44.968 17:05:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:44.968 17:05:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:16:44.968 17:05:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:16:45.227 17:05:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:16:45.485 17:05:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:16:46.422 17:05:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:16:46.422 17:05:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:46.422 17:05:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:46.422 17:05:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:46.680 17:05:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:46.680 17:05:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:46.680 17:05:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:46.680 17:05:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:46.939 17:05:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:46.939 17:05:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:46.939 17:05:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:46.939 17:05:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:47.198 17:05:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:47.198 17:05:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:47.198 17:05:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:47.198 17:05:37 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:47.457 17:05:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:47.457 17:05:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:47.457 17:05:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:47.457 17:05:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:47.714 17:05:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:47.714 17:05:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:47.714 17:05:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:47.714 17:05:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:47.970 17:05:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:47.970 17:05:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:16:47.970 17:05:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:16:48.229 17:05:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:16:48.487 17:05:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:16:49.861 17:05:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:16:49.862 17:05:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:49.862 17:05:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:49.862 17:05:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:49.862 17:05:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:49.862 17:05:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:49.862 17:05:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:49.862 17:05:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:50.120 17:05:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == 
\f\a\l\s\e ]] 00:16:50.120 17:05:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:50.120 17:05:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:50.120 17:05:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:50.377 17:05:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:50.377 17:05:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:50.377 17:05:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:50.377 17:05:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:50.635 17:05:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:50.635 17:05:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:50.635 17:05:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:50.635 17:05:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:51.202 17:05:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:51.202 17:05:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:51.202 17:05:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:51.202 17:05:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:51.202 17:05:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:51.202 17:05:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:16:51.202 17:05:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:16:51.461 17:05:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:16:51.719 17:05:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:16:53.094 17:05:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:16:53.094 17:05:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:53.094 17:05:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:53.094 17:05:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:53.094 17:05:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:53.094 17:05:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:53.094 17:05:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:53.094 17:05:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:53.351 17:05:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:53.351 17:05:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:53.351 17:05:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:53.351 17:05:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:53.610 17:05:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:53.610 17:05:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:53.610 17:05:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:53.610 17:05:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:53.867 17:05:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:53.867 17:05:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:53.867 17:05:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:53.867 17:05:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:54.125 17:05:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:54.125 17:05:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:16:54.125 17:05:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:54.125 17:05:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:54.384 17:05:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:54.384 17:05:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible 
inaccessible 00:16:54.384 17:05:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:16:54.642 17:05:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:16:54.901 17:05:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:16:55.835 17:05:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:16:55.836 17:05:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:55.836 17:05:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:55.836 17:05:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:56.094 17:05:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:56.094 17:05:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:56.094 17:05:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:56.094 17:05:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:56.353 17:05:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:56.353 17:05:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:56.353 17:05:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:56.353 17:05:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:56.612 17:05:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:56.612 17:05:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:56.612 17:05:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:56.612 17:05:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:56.870 17:05:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:56.870 17:05:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:16:56.870 17:05:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:56.870 17:05:47 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:57.131 17:05:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:57.131 17:05:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:16:57.131 17:05:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:57.131 17:05:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:57.389 17:05:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:57.389 17:05:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:16:57.389 17:05:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:16:57.648 17:05:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:16:57.907 17:05:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:16:58.842 17:05:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:16:58.842 17:05:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:58.842 17:05:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:58.842 17:05:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:59.101 17:05:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:59.101 17:05:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:59.101 17:05:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:59.101 17:05:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:59.359 17:05:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:59.359 17:05:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:59.359 17:05:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:59.359 17:05:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:59.618 17:05:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:59.618 17:05:49 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:59.618 17:05:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:59.618 17:05:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:59.876 17:05:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:59.876 17:05:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:16:59.876 17:05:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:59.876 17:05:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:00.135 17:05:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:00.135 17:05:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:17:00.135 17:05:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:00.135 17:05:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:00.394 17:05:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:00.394 17:05:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:17:00.652 17:05:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:17:00.652 17:05:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:17:00.909 17:05:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:17:01.168 17:05:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:17:02.543 17:05:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:17:02.543 17:05:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:17:02.544 17:05:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:02.544 17:05:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:02.544 17:05:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:02.544 17:05:52 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@69 -- # port_status 4421 current true 00:17:02.544 17:05:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:02.544 17:05:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:02.802 17:05:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:02.802 17:05:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:02.802 17:05:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:02.802 17:05:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:03.062 17:05:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:03.062 17:05:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:03.062 17:05:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:03.062 17:05:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:03.320 17:05:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:03.320 17:05:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:03.320 17:05:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:03.320 17:05:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:03.578 17:05:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:03.578 17:05:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:17:03.578 17:05:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:03.578 17:05:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:03.836 17:05:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:03.836 17:05:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:17:03.836 17:05:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:17:04.095 17:05:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t 
tcp -a 10.0.0.2 -s 4421 -n optimized 00:17:04.358 17:05:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:17:05.294 17:05:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:17:05.294 17:05:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:17:05.294 17:05:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:05.294 17:05:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:05.552 17:05:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:05.552 17:05:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:17:05.552 17:05:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:05.552 17:05:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:05.811 17:05:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:05.811 17:05:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:05.811 17:05:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:05.811 17:05:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:06.070 17:05:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:06.070 17:05:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:06.070 17:05:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:06.070 17:05:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:06.330 17:05:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:06.330 17:05:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:06.330 17:05:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:06.330 17:05:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:06.589 17:05:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:06.589 17:05:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:17:06.589 17:05:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:06.589 17:05:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:06.847 17:05:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:06.847 17:05:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:17:06.847 17:05:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:17:07.105 17:05:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:17:07.364 17:05:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:17:08.332 17:05:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:17:08.332 17:05:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:17:08.332 17:05:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:08.332 17:05:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:08.590 17:05:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:08.590 17:05:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:17:08.590 17:05:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:08.590 17:05:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:08.849 17:05:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:08.849 17:05:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:08.849 17:05:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:08.849 17:05:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:09.108 17:05:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:09.108 17:05:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:09.108 17:05:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:09.108 17:05:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").connected' 00:17:09.366 17:05:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:09.366 17:05:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:09.366 17:05:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:09.366 17:05:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:09.625 17:05:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:09.625 17:05:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:17:09.625 17:05:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:09.625 17:05:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:09.883 17:06:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:09.883 17:06:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:17:09.883 17:06:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:17:10.141 17:06:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:17:10.400 17:06:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:17:11.337 17:06:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:17:11.337 17:06:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:17:11.337 17:06:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:11.337 17:06:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:11.610 17:06:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:11.610 17:06:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:17:11.610 17:06:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:11.610 17:06:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:11.870 17:06:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:11.870 17:06:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # 
port_status 4420 connected true 00:17:11.870 17:06:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:11.870 17:06:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:12.129 17:06:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:12.129 17:06:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:12.129 17:06:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:12.129 17:06:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:12.387 17:06:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:12.387 17:06:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:12.387 17:06:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:12.387 17:06:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:12.657 17:06:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:12.657 17:06:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:17:12.657 17:06:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:12.657 17:06:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:12.915 17:06:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:12.915 17:06:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 76803 00:17:12.915 17:06:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 76803 ']' 00:17:12.915 17:06:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 76803 00:17:12.915 17:06:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:17:12.915 17:06:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:12.915 17:06:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76803 00:17:12.915 17:06:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:12.915 killing process with pid 76803 00:17:12.915 17:06:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:12.915 17:06:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76803' 00:17:12.915 17:06:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 76803 
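For reference, the helpers exercised over and over in the trace above (host/multipath_status.sh@59-@73) can be reconstructed from the xtrace output roughly as follows. This is a sketch inferred from the logged commands, not the verbatim script: the variable names rpc_py, NQN and bdevperf_rpc_sock are assumptions standing in for whatever the test actually defines, and error handling is omitted.

# Reconstructed sketch (bash), based only on the commands visible in the xtrace above.
# Assumed variable names: rpc_py, NQN, bdevperf_rpc_sock (hypothetical; the values below appear verbatim in the trace).
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
bdevperf_rpc_sock=/var/tmp/bdevperf.sock
NQN=nqn.2016-06.io.spdk:cnode1

# multipath_status.sh@59-@60: set the ANA state of the two listeners (ports 4420 and 4421).
set_ANA_state() {
    "$rpc_py" nvmf_subsystem_listener_set_ana_state "$NQN" -t tcp -a 10.0.0.2 -s 4420 -n "$1"
    "$rpc_py" nvmf_subsystem_listener_set_ana_state "$NQN" -t tcp -a 10.0.0.2 -s 4421 -n "$2"
}

# multipath_status.sh@64: query bdevperf's I/O paths and compare one field
# (current/connected/accessible) of the path with the given trsvcid against the expected value.
port_status() {
    local port=$1 field=$2 expected=$3
    [[ $("$rpc_py" -s "$bdevperf_rpc_sock" bdev_nvme_get_io_paths \
        | jq -r ".poll_groups[].io_paths[] | select (.transport.trsvcid==\"$port\").$field") == "$expected" ]]
}

# multipath_status.sh@68-@73: the six booleans are, in order,
# 4420 current, 4421 current, 4420 connected, 4421 connected, 4420 accessible, 4421 accessible.
check_status() {
    port_status 4420 current "$1"
    port_status 4421 current "$2"
    port_status 4420 connected "$3"
    port_status 4421 connected "$4"
    port_status 4420 accessible "$5"
    port_status 4421 accessible "$6"
}

The pattern in the trace is then: set_ANA_state <4420-state> <4421-state>; sleep 1; check_status with the expected flags. As the trace shows, an inaccessible listener drops both its current and accessible bits while staying connected, and after bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active both paths report current=true whenever both listeners are usable.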
00:17:12.915 17:06:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 76803 00:17:13.186 Connection closed with partial response: 00:17:13.186 00:17:13.186 00:17:13.186 17:06:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 76803 00:17:13.186 17:06:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:13.186 [2024-07-15 17:05:28.198304] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:17:13.187 [2024-07-15 17:05:28.198439] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76803 ] 00:17:13.187 [2024-07-15 17:05:28.336057] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:13.187 [2024-07-15 17:05:28.463163] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:13.187 [2024-07-15 17:05:28.517518] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:13.187 Running I/O for 90 seconds... 00:17:13.187 [2024-07-15 17:05:44.729477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:26624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.187 [2024-07-15 17:05:44.729569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:17:13.187 [2024-07-15 17:05:44.729631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.187 [2024-07-15 17:05:44.729653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:17:13.187 [2024-07-15 17:05:44.729675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:26760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.187 [2024-07-15 17:05:44.729690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:17:13.187 [2024-07-15 17:05:44.729712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:26768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.187 [2024-07-15 17:05:44.729727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:17:13.187 [2024-07-15 17:05:44.729749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:26776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.187 [2024-07-15 17:05:44.729763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:17:13.187 [2024-07-15 17:05:44.729784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:26784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.187 [2024-07-15 17:05:44.729799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:17:13.187 [2024-07-15 17:05:44.729820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 
lba:26792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.187 [2024-07-15 17:05:44.729835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:17:13.187 [2024-07-15 17:05:44.729856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:26800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.187 [2024-07-15 17:05:44.729871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:13.187 [2024-07-15 17:05:44.729896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:26808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.187 [2024-07-15 17:05:44.729913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:13.187 [2024-07-15 17:05:44.729935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:26816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.187 [2024-07-15 17:05:44.729950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:13.187 [2024-07-15 17:05:44.729971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:26824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.187 [2024-07-15 17:05:44.730008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:17:13.187 [2024-07-15 17:05:44.730032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:26832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.188 [2024-07-15 17:05:44.730047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:17:13.188 [2024-07-15 17:05:44.730068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:26840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.188 [2024-07-15 17:05:44.730083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:17:13.188 [2024-07-15 17:05:44.730104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:26848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.188 [2024-07-15 17:05:44.730118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:17:13.188 [2024-07-15 17:05:44.730139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:26856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.188 [2024-07-15 17:05:44.730153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:13.188 [2024-07-15 17:05:44.730174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:26864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.188 [2024-07-15 17:05:44.730190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:17:13.188 [2024-07-15 17:05:44.730228] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:26872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.188 [2024-07-15 17:05:44.730248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:17:13.188 [2024-07-15 17:05:44.730271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:26880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.188 [2024-07-15 17:05:44.730286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:17:13.188 [2024-07-15 17:05:44.730307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:26888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.188 [2024-07-15 17:05:44.730321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:17:13.188 [2024-07-15 17:05:44.730343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:26896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.188 [2024-07-15 17:05:44.730374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:17:13.188 [2024-07-15 17:05:44.730398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:26904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.188 [2024-07-15 17:05:44.730412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:17:13.188 [2024-07-15 17:05:44.730433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:26912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.188 [2024-07-15 17:05:44.730448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:17:13.188 [2024-07-15 17:05:44.730469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:26920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.188 [2024-07-15 17:05:44.730495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:17:13.188 [2024-07-15 17:05:44.730518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:26928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.188 [2024-07-15 17:05:44.730533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:17:13.189 [2024-07-15 17:05:44.730559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:26936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.189 [2024-07-15 17:05:44.730575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:17:13.189 [2024-07-15 17:05:44.730596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:26944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.189 [2024-07-15 17:05:44.730611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:17:13.189 
[2024-07-15 17:05:44.730632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:26952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.189 [2024-07-15 17:05:44.730647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:17:13.189 [2024-07-15 17:05:44.730668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:26960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.189 [2024-07-15 17:05:44.730683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:17:13.189 [2024-07-15 17:05:44.730704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:26968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.189 [2024-07-15 17:05:44.730719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:17:13.189 [2024-07-15 17:05:44.730740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:26976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.189 [2024-07-15 17:05:44.730755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:17:13.189 [2024-07-15 17:05:44.730776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:26984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.189 [2024-07-15 17:05:44.730791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:17:13.189 [2024-07-15 17:05:44.730812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.189 [2024-07-15 17:05:44.730827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:17:13.189 [2024-07-15 17:05:44.730862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.189 [2024-07-15 17:05:44.730881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:17:13.189 [2024-07-15 17:05:44.730903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:27008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.189 [2024-07-15 17:05:44.730918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:17:13.189 [2024-07-15 17:05:44.730940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:27016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.189 [2024-07-15 17:05:44.730954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:17:13.189 [2024-07-15 17:05:44.730985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:27024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.189 [2024-07-15 17:05:44.731001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:43 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:17:13.189 [2024-07-15 17:05:44.731022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:27032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.189 [2024-07-15 17:05:44.731037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:17:13.189 [2024-07-15 17:05:44.731058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:27040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.189 [2024-07-15 17:05:44.731073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:17:13.190 [2024-07-15 17:05:44.731094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:27048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.190 [2024-07-15 17:05:44.731109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:17:13.190 [2024-07-15 17:05:44.731131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:27056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.190 [2024-07-15 17:05:44.731145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:13.190 [2024-07-15 17:05:44.731170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:27064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.190 [2024-07-15 17:05:44.731185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:13.190 [2024-07-15 17:05:44.731207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:27072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.190 [2024-07-15 17:05:44.731221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:13.190 [2024-07-15 17:05:44.731243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:27080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.190 [2024-07-15 17:05:44.731258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:13.190 [2024-07-15 17:05:44.731279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:27088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.190 [2024-07-15 17:05:44.731293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:13.190 [2024-07-15 17:05:44.731315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:27096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.190 [2024-07-15 17:05:44.731329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:13.190 [2024-07-15 17:05:44.731350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:27104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.190 [2024-07-15 17:05:44.731377] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:13.190 [2024-07-15 17:05:44.731399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:27112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.190 [2024-07-15 17:05:44.731414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:13.190 [2024-07-15 17:05:44.731444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:27120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.191 [2024-07-15 17:05:44.731461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:13.191 [2024-07-15 17:05:44.731506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:27128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.191 [2024-07-15 17:05:44.731525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:17:13.191 [2024-07-15 17:05:44.731547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.191 [2024-07-15 17:05:44.731562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:17:13.191 [2024-07-15 17:05:44.731584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:27144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.191 [2024-07-15 17:05:44.731598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:17:13.191 [2024-07-15 17:05:44.731622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:27152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.191 [2024-07-15 17:05:44.731637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:13.191 [2024-07-15 17:05:44.731658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:27160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.191 [2024-07-15 17:05:44.731673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:13.191 [2024-07-15 17:05:44.731694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:27168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.191 [2024-07-15 17:05:44.731709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:13.191 [2024-07-15 17:05:44.731730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:27176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.191 [2024-07-15 17:05:44.731745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:13.191 [2024-07-15 17:05:44.731766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:27184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.191 [2024-07-15 
17:05:44.731781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:17:13.191 [2024-07-15 17:05:44.731806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:27192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.191 [2024-07-15 17:05:44.731821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:17:13.191 [2024-07-15 17:05:44.731842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:27200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.191 [2024-07-15 17:05:44.731857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:17:13.191 [2024-07-15 17:05:44.731878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:27208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.191 [2024-07-15 17:05:44.731893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:17:13.191 [2024-07-15 17:05:44.731914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:27216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.192 [2024-07-15 17:05:44.731937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:17:13.192 [2024-07-15 17:05:44.731960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:27224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.192 [2024-07-15 17:05:44.731975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:17:13.192 [2024-07-15 17:05:44.731997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:27232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.192 [2024-07-15 17:05:44.732011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:17:13.192 [2024-07-15 17:05:44.732033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:27240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.192 [2024-07-15 17:05:44.732047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:17:13.192 [2024-07-15 17:05:44.732069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:27248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.192 [2024-07-15 17:05:44.732084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:17:13.192 [2024-07-15 17:05:44.732281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:27256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.192 [2024-07-15 17:05:44.732306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:17:13.192 [2024-07-15 17:05:44.732336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:27264 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:17:13.192 [2024-07-15 17:05:44.732352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:17:13.192 [2024-07-15 17:05:44.732395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:27272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.192 [2024-07-15 17:05:44.732412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:17:13.192 [2024-07-15 17:05:44.732438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:27280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.192 [2024-07-15 17:05:44.732453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:17:13.192 [2024-07-15 17:05:44.732479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:27288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.192 [2024-07-15 17:05:44.732494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:17:13.194 [2024-07-15 17:05:44.732521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:27296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.194 [2024-07-15 17:05:44.732536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:17:13.194 [2024-07-15 17:05:44.732562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:27304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.194 [2024-07-15 17:05:44.732576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:17:13.194 [2024-07-15 17:05:44.732602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:27312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.194 [2024-07-15 17:05:44.732628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:13.194 [2024-07-15 17:05:44.732660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:27320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.194 [2024-07-15 17:05:44.732676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:13.194 [2024-07-15 17:05:44.732702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:27328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.194 [2024-07-15 17:05:44.732717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:13.194 [2024-07-15 17:05:44.732743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:27336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.194 [2024-07-15 17:05:44.732759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:17:13.194 [2024-07-15 17:05:44.732785] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:27344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.194 [2024-07-15 17:05:44.732800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:17:13.194 [2024-07-15 17:05:44.732827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.194 [2024-07-15 17:05:44.732841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:17:13.194 [2024-07-15 17:05:44.732868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.194 [2024-07-15 17:05:44.732883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:17:13.194 [2024-07-15 17:05:44.732910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:27368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.194 [2024-07-15 17:05:44.732924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:13.195 [2024-07-15 17:05:44.732951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:27376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.195 [2024-07-15 17:05:44.732975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:17:13.195 [2024-07-15 17:05:44.733024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:27384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.195 [2024-07-15 17:05:44.733042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:17:13.195 [2024-07-15 17:05:44.733070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:27392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.195 [2024-07-15 17:05:44.733085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:17:13.195 [2024-07-15 17:05:44.733112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:27400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.195 [2024-07-15 17:05:44.733126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:17:13.195 [2024-07-15 17:05:44.733153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:27408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.195 [2024-07-15 17:05:44.733167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:17:13.195 [2024-07-15 17:05:44.733204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:27416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.195 [2024-07-15 17:05:44.733220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:17:13.195 [2024-07-15 17:05:44.733246] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:27424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.195 [2024-07-15 17:05:44.733261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:17:13.195 [2024-07-15 17:05:44.733296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:27432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.195 [2024-07-15 17:05:44.733312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:17:13.195 [2024-07-15 17:05:44.733339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:27440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.195 [2024-07-15 17:05:44.733367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:17:13.195 [2024-07-15 17:05:44.733400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.195 [2024-07-15 17:05:44.733417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:17:13.195 [2024-07-15 17:05:44.733443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:27456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.196 [2024-07-15 17:05:44.733459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:17:13.196 [2024-07-15 17:05:44.733485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.196 [2024-07-15 17:05:44.733500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:17:13.196 [2024-07-15 17:05:44.733526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:27472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.196 [2024-07-15 17:05:44.733541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:17:13.196 [2024-07-15 17:05:44.733575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:27480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.196 [2024-07-15 17:05:44.733589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:17:13.196 [2024-07-15 17:05:44.733615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.196 [2024-07-15 17:05:44.733630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:17:13.196 [2024-07-15 17:05:44.733656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:27496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.196 [2024-07-15 17:05:44.733671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0057 p:0 
m:0 dnr:0 00:17:13.196 [2024-07-15 17:05:44.733697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:27504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.196 [2024-07-15 17:05:44.733713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:17:13.196 [2024-07-15 17:05:44.733766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.196 [2024-07-15 17:05:44.733786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:17:13.196 [2024-07-15 17:05:44.733813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:27520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.196 [2024-07-15 17:05:44.733828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:17:13.196 [2024-07-15 17:05:44.733854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:27528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.196 [2024-07-15 17:05:44.733869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:17:13.196 [2024-07-15 17:05:44.733895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:27536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.196 [2024-07-15 17:05:44.733909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:17:13.196 [2024-07-15 17:05:44.733935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:27544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.196 [2024-07-15 17:05:44.733950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:17:13.196 [2024-07-15 17:05:44.733977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:27552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.196 [2024-07-15 17:05:44.733992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:17:13.196 [2024-07-15 17:05:44.734023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:27560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.197 [2024-07-15 17:05:44.734038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:17:13.197 [2024-07-15 17:05:44.734064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:27568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.197 [2024-07-15 17:05:44.734079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:13.197 [2024-07-15 17:05:44.734285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:27576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.197 [2024-07-15 17:05:44.734311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:13.197 [2024-07-15 17:05:44.734344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:27584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.197 [2024-07-15 17:05:44.734379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:13.197 [2024-07-15 17:05:44.734411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:27592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.197 [2024-07-15 17:05:44.734426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:17:13.197 [2024-07-15 17:05:44.734455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:27600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.197 [2024-07-15 17:05:44.734471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:17:13.197 [2024-07-15 17:05:44.734500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:27608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.198 [2024-07-15 17:05:44.734526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:17:13.198 [2024-07-15 17:05:44.734557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:27616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.198 [2024-07-15 17:05:44.734572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:17:13.198 [2024-07-15 17:05:44.734602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.198 [2024-07-15 17:05:44.734617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:13.198 [2024-07-15 17:05:44.734647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:26632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.198 [2024-07-15 17:05:44.734662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:17:13.198 [2024-07-15 17:05:44.734699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.198 [2024-07-15 17:05:44.734715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:17:13.198 [2024-07-15 17:05:44.734744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:26648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.198 [2024-07-15 17:05:44.734759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:17:13.198 [2024-07-15 17:05:44.734789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:26656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.198 [2024-07-15 17:05:44.734803] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:17:13.198 [2024-07-15 17:05:44.734833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:26664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.198 [2024-07-15 17:05:44.734848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:17:13.198 [2024-07-15 17:05:44.734877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:26672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.199 [2024-07-15 17:05:44.734892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:17:13.199 [2024-07-15 17:05:44.734921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.199 [2024-07-15 17:05:44.734936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:17:13.199 [2024-07-15 17:05:44.734970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:26688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.199 [2024-07-15 17:05:44.734986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:17:13.199 [2024-07-15 17:05:44.735016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:26696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.199 [2024-07-15 17:05:44.735031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:17:13.199 [2024-07-15 17:05:44.735060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:26704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.199 [2024-07-15 17:05:44.735083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:17:13.199 [2024-07-15 17:05:44.735113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:26712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.199 [2024-07-15 17:05:44.735128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:17:13.199 [2024-07-15 17:05:44.735157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:26720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.199 [2024-07-15 17:05:44.735172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:17:13.199 [2024-07-15 17:05:44.735201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:26728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.199 [2024-07-15 17:05:44.735216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:17:13.199 [2024-07-15 17:05:44.735246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:26736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:17:13.199 [2024-07-15 17:05:44.735261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:17:13.199 [2024-07-15 17:05:44.735290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:26744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.199 [2024-07-15 17:05:44.735305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:17:13.199 [2024-07-15 17:05:44.735334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:27632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.199 [2024-07-15 17:05:44.735349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:17:13.199 [2024-07-15 17:05:44.735406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:27640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.199 [2024-07-15 17:05:44.735426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:17:13.199 [2024-07-15 17:06:00.515695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:89280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.199 [2024-07-15 17:06:00.515758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:17:13.199 [2024-07-15 17:06:00.515812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:89296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.199 [2024-07-15 17:06:00.515833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:17:13.199 [2024-07-15 17:06:00.515855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:89312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.199 [2024-07-15 17:06:00.515870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:17:13.199 [2024-07-15 17:06:00.515892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:89328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.199 [2024-07-15 17:06:00.515907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:17:13.199 [2024-07-15 17:06:00.515928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.199 [2024-07-15 17:06:00.515943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:17:13.199 [2024-07-15 17:06:00.515993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:88896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.199 [2024-07-15 17:06:00.516009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:17:13.199 [2024-07-15 17:06:00.516031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 
lba:88928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.199 [2024-07-15 17:06:00.516045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:17:13.199 [2024-07-15 17:06:00.516066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:88960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.200 [2024-07-15 17:06:00.516080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:17:13.200 [2024-07-15 17:06:00.516101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:89000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.200 [2024-07-15 17:06:00.516115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:17:13.200 [2024-07-15 17:06:00.516136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:89368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.200 [2024-07-15 17:06:00.516151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:17:13.200 [2024-07-15 17:06:00.516171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:89384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.200 [2024-07-15 17:06:00.516185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:17:13.200 [2024-07-15 17:06:00.516206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:89016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.200 [2024-07-15 17:06:00.516221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:17:13.200 [2024-07-15 17:06:00.516242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:88688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.200 [2024-07-15 17:06:00.516256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:17:13.200 [2024-07-15 17:06:00.516277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:88720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.200 [2024-07-15 17:06:00.516291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:17:13.200 [2024-07-15 17:06:00.516312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:88752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.200 [2024-07-15 17:06:00.516326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:17:13.200 [2024-07-15 17:06:00.516347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:88784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.200 [2024-07-15 17:06:00.516375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:17:13.200 [2024-07-15 17:06:00.516398] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:88816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.200 [2024-07-15 17:06:00.516413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:17:13.200 [2024-07-15 17:06:00.516446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:89408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.200 [2024-07-15 17:06:00.516462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:17:13.200 [2024-07-15 17:06:00.516484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:89424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.200 [2024-07-15 17:06:00.516499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:17:13.200 [2024-07-15 17:06:00.516520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:89440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.200 [2024-07-15 17:06:00.516535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:17:13.200 [2024-07-15 17:06:00.516562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:89456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.201 [2024-07-15 17:06:00.516576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:17:13.201 [2024-07-15 17:06:00.516597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:89048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.201 [2024-07-15 17:06:00.516611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:13.201 [2024-07-15 17:06:00.516632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:89080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.201 [2024-07-15 17:06:00.516646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:13.201 [2024-07-15 17:06:00.516667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:89112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.201 [2024-07-15 17:06:00.516681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:13.201 [2024-07-15 17:06:00.516702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:89464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.201 [2024-07-15 17:06:00.516717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:17:13.201 [2024-07-15 17:06:00.516738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:89480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.201 [2024-07-15 17:06:00.516752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 
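Nearly every completion in this stretch carries the same status, printed by SPDK as ASYMMETRIC ACCESS INACCESSIBLE (03/02): status code type 0x3 (path-related) with status code 0x02, meaning the ANA state of this path was flipped to inaccessible while the multipath-status test kept I/O in flight. The snippet below is a minimal, illustrative decode of the 16-bit phase+status word behind the p/sc/sct/m/dnr fields shown in these notices; it is not part of the log and assumes the standard NVMe completion layout (P in bit 0, SC in bits 8:1, SCT in bits 11:9, M in bit 14, DNR in bit 15).

    # illustration only, not from the test output: decode completion DW3[31:16]
    decode_nvme_status() {
      local st=$1                                   # raw phase+status word, e.g. 0x0604
      printf 'sct:0x%x sc:0x%02x p:%d m:%d dnr:%d\n' \
        $(( (st >> 9) & 0x7 )) $(( (st >> 1) & 0xff )) \
        $(( st & 1 )) $(( (st >> 14) & 1 )) $(( (st >> 15) & 1 ))
    }
    decode_nvme_status 0x0604                       # -> sct:0x3 sc:0x02 p:0 m:0 dnr:0, the (03/02) above

With dnr:0 the status leaves retry permitted, which is what allows the host multipath code being exercised here to resubmit the I/O on another path instead of failing it.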
00:17:13.201 [2024-07-15 17:06:00.516773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:89496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.201 [2024-07-15 17:06:00.516787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:17:13.201 [2024-07-15 17:06:00.516808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:89512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.201 [2024-07-15 17:06:00.516822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:17:13.201 [2024-07-15 17:06:00.516843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:89144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.201 [2024-07-15 17:06:00.516858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:13.201 [2024-07-15 17:06:00.516879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:88864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.201 [2024-07-15 17:06:00.516900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:17:13.201 [2024-07-15 17:06:00.516922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:89528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.201 [2024-07-15 17:06:00.516937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:17:13.201 [2024-07-15 17:06:00.516958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:89544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.202 [2024-07-15 17:06:00.516972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:17:13.202 [2024-07-15 17:06:00.516993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:89560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.202 [2024-07-15 17:06:00.517007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:17:13.202 [2024-07-15 17:06:00.517029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:89576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.202 [2024-07-15 17:06:00.517044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:17:13.202 [2024-07-15 17:06:00.517065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:89592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.202 [2024-07-15 17:06:00.517079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:17:13.202 [2024-07-15 17:06:00.517100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:89176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.202 [2024-07-15 17:06:00.517114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:24 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:17:13.202 [2024-07-15 17:06:00.517135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:89208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.202 [2024-07-15 17:06:00.517149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:17:13.202 [2024-07-15 17:06:00.517170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:89240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.204 [2024-07-15 17:06:00.517185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:17:13.204 [2024-07-15 17:06:00.517206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:89600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.204 [2024-07-15 17:06:00.517220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:17:13.204 [2024-07-15 17:06:00.517241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:89616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.204 [2024-07-15 17:06:00.517255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:17:13.204 [2024-07-15 17:06:00.517276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:89632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.204 [2024-07-15 17:06:00.517291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:17:13.204 [2024-07-15 17:06:00.517311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:89264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.204 [2024-07-15 17:06:00.517333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:17:13.204 [2024-07-15 17:06:00.517367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:89288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.204 [2024-07-15 17:06:00.517384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:17:13.204 [2024-07-15 17:06:00.517406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:89320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.204 [2024-07-15 17:06:00.517420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:17:13.204 [2024-07-15 17:06:00.517441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:89352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.204 [2024-07-15 17:06:00.517456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:17:13.204 [2024-07-15 17:06:00.517477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:88904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.204 [2024-07-15 17:06:00.517491] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:17:13.204 [2024-07-15 17:06:00.517512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:88936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.204 [2024-07-15 17:06:00.517526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:17:13.204 [2024-07-15 17:06:00.517547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:88968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.204 [2024-07-15 17:06:00.517562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:17:13.204 [2024-07-15 17:06:00.517583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:89656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.204 [2024-07-15 17:06:00.517598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:17:13.204 [2024-07-15 17:06:00.517619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:89672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.204 [2024-07-15 17:06:00.517634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:17:13.204 [2024-07-15 17:06:00.517655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:88992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.204 [2024-07-15 17:06:00.517669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:17:13.204 [2024-07-15 17:06:00.517690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:89024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.204 [2024-07-15 17:06:00.517704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:17:13.204 [2024-07-15 17:06:00.517725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:89040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.204 [2024-07-15 17:06:00.517739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:17:13.204 [2024-07-15 17:06:00.517761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:89072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.204 [2024-07-15 17:06:00.517782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:13.204 [2024-07-15 17:06:00.517823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:89360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.204 [2024-07-15 17:06:00.517842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:13.204 [2024-07-15 17:06:00.517864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:89696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:17:13.204 [2024-07-15 17:06:00.517878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:13.204 [2024-07-15 17:06:00.517900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:89712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.204 [2024-07-15 17:06:00.517914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:13.204 [2024-07-15 17:06:00.517935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:89400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.204 [2024-07-15 17:06:00.517950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:13.204 [2024-07-15 17:06:00.517971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:89432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.204 [2024-07-15 17:06:00.517985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:13.204 [2024-07-15 17:06:00.518006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:89720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.204 [2024-07-15 17:06:00.518021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:13.205 [2024-07-15 17:06:00.518041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:89736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.205 [2024-07-15 17:06:00.518055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:13.205 [2024-07-15 17:06:00.518077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:89752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.205 [2024-07-15 17:06:00.518091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:13.205 [2024-07-15 17:06:00.518112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:89104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.205 [2024-07-15 17:06:00.518126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:17:13.205 [2024-07-15 17:06:00.518147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:89136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.205 [2024-07-15 17:06:00.518163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:17:13.205 [2024-07-15 17:06:00.518195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:89776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.205 [2024-07-15 17:06:00.518219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:17:13.205 [2024-07-15 17:06:00.518241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 
nsid:1 lba:89792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.205 [2024-07-15 17:06:00.518256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:13.205 [2024-07-15 17:06:00.518287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:89808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.205 [2024-07-15 17:06:00.518302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:13.205 [2024-07-15 17:06:00.518324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:89152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.205 [2024-07-15 17:06:00.518338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:13.205 [2024-07-15 17:06:00.518372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:89184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.205 [2024-07-15 17:06:00.518389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:13.205 [2024-07-15 17:06:00.518412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:89216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.205 [2024-07-15 17:06:00.518426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:17:13.205 [2024-07-15 17:06:00.519529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:89472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.205 [2024-07-15 17:06:00.519558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:17:13.205 [2024-07-15 17:06:00.519587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:89504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.205 [2024-07-15 17:06:00.519603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:17:13.205 [2024-07-15 17:06:00.519625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:89536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.205 [2024-07-15 17:06:00.519640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:17:13.205 [2024-07-15 17:06:00.519661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:89568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.205 [2024-07-15 17:06:00.519675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:17:13.205 [2024-07-15 17:06:00.519696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:89832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.205 [2024-07-15 17:06:00.519711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:17:13.205 [2024-07-15 17:06:00.519732] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:89848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.205 [2024-07-15 17:06:00.519746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:17:13.205 [2024-07-15 17:06:00.519768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:89584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.205 [2024-07-15 17:06:00.519782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:17:13.205 [2024-07-15 17:06:00.519803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:89272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.205 [2024-07-15 17:06:00.519817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:17:13.205 [2024-07-15 17:06:00.519850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:89872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.205 [2024-07-15 17:06:00.519866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:17:13.205 [2024-07-15 17:06:00.519888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:89888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.205 [2024-07-15 17:06:00.519903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:17:13.205 [2024-07-15 17:06:00.519930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:89904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:13.205 [2024-07-15 17:06:00.519946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:17:13.205 Received shutdown signal, test time was about 32.882181 seconds 00:17:13.205 00:17:13.205 Latency(us) 00:17:13.205 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:13.205 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:13.205 Verification LBA range: start 0x0 length 0x4000 00:17:13.205 Nvme0n1 : 32.88 7554.32 29.51 0.00 0.00 16911.41 448.70 4026531.84 00:17:13.205 =================================================================================================================== 00:17:13.205 Total : 7554.32 29.51 0.00 0.00 16911.41 448.70 4026531.84 00:17:13.205 17:06:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:13.463 17:06:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:17:13.463 17:06:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:13.463 17:06:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:17:13.463 17:06:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:13.463 17:06:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:17:13.721 17:06:03 nvmf_tcp.nvmf_host_multipath_status 
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:13.721 17:06:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:17:13.721 17:06:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:13.721 17:06:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:13.721 rmmod nvme_tcp 00:17:13.721 rmmod nvme_fabrics 00:17:13.721 rmmod nvme_keyring 00:17:13.721 17:06:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:13.721 17:06:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:17:13.721 17:06:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:17:13.721 17:06:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 76747 ']' 00:17:13.721 17:06:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 76747 00:17:13.721 17:06:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 76747 ']' 00:17:13.721 17:06:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 76747 00:17:13.721 17:06:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:17:13.721 17:06:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:13.721 17:06:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76747 00:17:13.721 17:06:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:13.721 17:06:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:13.721 killing process with pid 76747 00:17:13.721 17:06:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76747' 00:17:13.721 17:06:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 76747 00:17:13.721 17:06:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 76747 00:17:13.979 17:06:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:13.979 17:06:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:13.979 17:06:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:13.979 17:06:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:13.979 17:06:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:13.979 17:06:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:13.979 17:06:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:13.979 17:06:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:13.979 17:06:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:13.979 00:17:13.979 real 0m38.974s 00:17:13.979 user 2m5.624s 00:17:13.979 sys 0m11.713s 00:17:13.979 17:06:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:13.979 17:06:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:17:13.979 ************************************ 00:17:13.979 END TEST 
nvmf_host_multipath_status 00:17:13.979 ************************************ 00:17:13.979 17:06:04 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:13.979 17:06:04 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:17:13.979 17:06:04 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:13.979 17:06:04 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:13.979 17:06:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:13.979 ************************************ 00:17:13.979 START TEST nvmf_discovery_remove_ifc 00:17:13.979 ************************************ 00:17:13.979 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:17:13.979 * Looking for test storage... 00:17:13.979 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:13.979 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:13.979 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:17:13.979 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:13.979 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:13.979 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:13.979 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:13.979 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:13.979 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:13.979 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:13.979 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:13.979 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:13.979 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:14.237 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da 00:17:14.237 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=0b4e8503-7bac-4879-926a-209303c4b3da 00:17:14.237 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:14.237 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:14.237 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:14.237 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:14.237 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:14.238 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:14.238 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:14.238 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- 
scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:14.238 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:14.238 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:14.238 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:14.238 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:17:14.238 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:14.238 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:17:14.238 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:14.238 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:14.238 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:14.238 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:14.238 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:14.238 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:14.238 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:14.238 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:14.238 17:06:04 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:17:14.238 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:17:14.238 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:17:14.238 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:17:14.238 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:17:14.238 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:17:14.238 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:17:14.238 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:14.238 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:14.238 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:14.238 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:14.238 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:14.238 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:14.238 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:14.238 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:14.238 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:14.238 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:14.238 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:14.238 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:14.238 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:14.238 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:14.238 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:14.238 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:14.238 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:14.238 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:14.238 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:14.238 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:14.238 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:14.238 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:14.238 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:14.238 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
00:17:14.238 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:14.238 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:14.238 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:14.238 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:14.238 Cannot find device "nvmf_tgt_br" 00:17:14.238 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # true 00:17:14.238 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:14.238 Cannot find device "nvmf_tgt_br2" 00:17:14.238 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # true 00:17:14.238 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:14.238 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:14.238 Cannot find device "nvmf_tgt_br" 00:17:14.238 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # true 00:17:14.238 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:14.238 Cannot find device "nvmf_tgt_br2" 00:17:14.238 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # true 00:17:14.238 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:14.238 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:14.238 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:14.238 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:14.238 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:17:14.238 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:14.238 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:14.238 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:17:14.238 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:14.238 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:14.238 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:14.238 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:14.238 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:14.238 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:14.238 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:14.238 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:14.238 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 
10.0.0.3/24 dev nvmf_tgt_if2 00:17:14.495 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:14.495 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:14.495 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:14.495 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:14.495 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:14.495 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:14.495 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:14.495 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:14.495 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:14.495 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:14.495 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:14.495 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:14.495 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:14.495 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:14.495 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:14.495 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:14.495 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:17:14.495 00:17:14.495 --- 10.0.0.2 ping statistics --- 00:17:14.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:14.495 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:17:14.495 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:14.495 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:14.495 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:17:14.495 00:17:14.495 --- 10.0.0.3 ping statistics --- 00:17:14.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:14.495 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:17:14.495 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:14.495 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:14.495 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:17:14.495 00:17:14.495 --- 10.0.0.1 ping statistics --- 00:17:14.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:14.495 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:17:14.495 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:14.495 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@433 -- # return 0 00:17:14.495 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:14.495 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:14.495 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:14.495 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:14.495 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:14.495 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:14.495 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:14.495 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:17:14.495 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:14.495 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:14.495 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:14.495 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=77587 00:17:14.495 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:14.495 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 77587 00:17:14.495 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 77587 ']' 00:17:14.495 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:14.495 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:14.495 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:14.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:14.495 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:14.495 17:06:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:14.495 [2024-07-15 17:06:04.732650] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:17:14.495 [2024-07-15 17:06:04.732768] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:14.753 [2024-07-15 17:06:04.876966] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:14.753 [2024-07-15 17:06:05.002749] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:14.753 [2024-07-15 17:06:05.002833] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:14.753 [2024-07-15 17:06:05.002858] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:14.753 [2024-07-15 17:06:05.002869] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:14.753 [2024-07-15 17:06:05.002878] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:14.753 [2024-07-15 17:06:05.002908] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:15.010 [2024-07-15 17:06:05.060351] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:15.577 17:06:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:15.577 17:06:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:17:15.577 17:06:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:15.577 17:06:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:15.577 17:06:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:15.577 17:06:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:15.577 17:06:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:17:15.577 17:06:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.577 17:06:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:15.577 [2024-07-15 17:06:05.663826] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:15.577 [2024-07-15 17:06:05.671962] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:17:15.577 null0 00:17:15.577 [2024-07-15 17:06:05.704436] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:15.577 17:06:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.577 17:06:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=77619 00:17:15.577 17:06:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:17:15.577 17:06:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 77619 /tmp/host.sock 00:17:15.577 17:06:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 77619 ']' 00:17:15.577 17:06:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:17:15.577 17:06:05 
nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:15.577 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:17:15.577 17:06:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:17:15.577 17:06:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:15.577 17:06:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:15.577 [2024-07-15 17:06:05.776406] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:17:15.577 [2024-07-15 17:06:05.776494] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77619 ] 00:17:15.836 [2024-07-15 17:06:05.914848] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:15.836 [2024-07-15 17:06:06.042177] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:16.771 17:06:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:16.771 17:06:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:17:16.771 17:06:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:16.771 17:06:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:17:16.771 17:06:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.771 17:06:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:16.771 17:06:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.771 17:06:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:17:16.771 17:06:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.771 17:06:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:16.771 [2024-07-15 17:06:06.862256] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:16.771 17:06:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.771 17:06:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:17:16.771 17:06:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.771 17:06:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:17.705 [2024-07-15 17:06:07.916920] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:17:17.705 [2024-07-15 17:06:07.916993] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:17:17.705 [2024-07-15 17:06:07.917013] 
bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:17:17.705 [2024-07-15 17:06:07.922977] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:17:17.705 [2024-07-15 17:06:07.980406] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:17:17.705 [2024-07-15 17:06:07.980492] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:17:17.705 [2024-07-15 17:06:07.980524] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:17:17.705 [2024-07-15 17:06:07.980547] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:17:17.705 [2024-07-15 17:06:07.980576] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:17:17.705 17:06:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.705 17:06:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:17:17.705 17:06:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:17.705 [2024-07-15 17:06:07.985607] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x2279de0 was disconnected and freed. delete nvme_qpair. 00:17:17.705 17:06:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:17.705 17:06:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:17.705 17:06:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.705 17:06:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:17.705 17:06:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:17.705 17:06:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:17.963 17:06:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.963 17:06:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:17:17.963 17:06:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:17:17.963 17:06:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:17:17.963 17:06:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:17:17.963 17:06:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:17.963 17:06:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:17.963 17:06:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:17.963 17:06:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:17.963 17:06:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:17.963 17:06:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.963 17:06:08 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@10 -- # set +x 00:17:17.963 17:06:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.963 17:06:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:17.963 17:06:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:18.960 17:06:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:18.960 17:06:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:18.960 17:06:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:18.960 17:06:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.960 17:06:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:18.960 17:06:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:18.960 17:06:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:18.960 17:06:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.960 17:06:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:18.960 17:06:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:19.894 17:06:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:20.151 17:06:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:20.151 17:06:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.151 17:06:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:20.151 17:06:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:20.151 17:06:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:20.151 17:06:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:20.151 17:06:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.151 17:06:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:20.151 17:06:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:21.082 17:06:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:21.082 17:06:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:21.082 17:06:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:21.082 17:06:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.082 17:06:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:21.082 17:06:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:21.082 17:06:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:21.082 17:06:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.082 17:06:11 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:21.082 17:06:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:22.037 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:22.037 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:22.037 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:22.037 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:22.037 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.037 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:22.037 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:22.295 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.295 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:22.295 17:06:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:23.227 17:06:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:23.227 17:06:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:23.227 17:06:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.227 17:06:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:23.227 17:06:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:23.227 17:06:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:23.227 17:06:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:23.227 17:06:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.227 [2024-07-15 17:06:13.408214] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:17:23.227 [2024-07-15 17:06:13.408303] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:23.227 [2024-07-15 17:06:13.408321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:23.227 [2024-07-15 17:06:13.408335] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:23.227 [2024-07-15 17:06:13.408345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:23.227 [2024-07-15 17:06:13.408365] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:23.227 [2024-07-15 17:06:13.408376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:23.227 [2024-07-15 17:06:13.408386] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 
cdw11:00000000 00:17:23.227 [2024-07-15 17:06:13.408397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:23.227 [2024-07-15 17:06:13.408407] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:17:23.227 [2024-07-15 17:06:13.408416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:23.227 [2024-07-15 17:06:13.408426] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dfac0 is same with the state(5) to be set 00:17:23.227 [2024-07-15 17:06:13.418204] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21dfac0 (9): Bad file descriptor 00:17:23.227 17:06:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:23.227 17:06:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:23.227 [2024-07-15 17:06:13.428229] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:24.163 17:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:24.163 17:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:24.163 17:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:24.163 17:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.163 17:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:24.163 17:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:24.163 17:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:24.421 [2024-07-15 17:06:14.482507] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:17:24.421 [2024-07-15 17:06:14.482638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21dfac0 with addr=10.0.0.2, port=4420 00:17:24.422 [2024-07-15 17:06:14.482677] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dfac0 is same with the state(5) to be set 00:17:24.422 [2024-07-15 17:06:14.482750] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21dfac0 (9): Bad file descriptor 00:17:24.422 [2024-07-15 17:06:14.483742] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:24.422 [2024-07-15 17:06:14.483814] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:17:24.422 [2024-07-15 17:06:14.483837] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:17:24.422 [2024-07-15 17:06:14.483860] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:17:24.422 [2024-07-15 17:06:14.483928] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:17:24.422 [2024-07-15 17:06:14.483954] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:24.422 17:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.422 17:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:24.422 17:06:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:25.354 [2024-07-15 17:06:15.484026] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:17:25.354 [2024-07-15 17:06:15.484121] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:17:25.354 [2024-07-15 17:06:15.484134] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:17:25.354 [2024-07-15 17:06:15.484145] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:17:25.354 [2024-07-15 17:06:15.484171] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:25.354 [2024-07-15 17:06:15.484201] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:17:25.354 [2024-07-15 17:06:15.484262] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:25.354 [2024-07-15 17:06:15.484279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.354 [2024-07-15 17:06:15.484293] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:25.354 [2024-07-15 17:06:15.484303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.354 [2024-07-15 17:06:15.484313] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:25.354 [2024-07-15 17:06:15.484322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.354 [2024-07-15 17:06:15.484332] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:25.354 [2024-07-15 17:06:15.484341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.354 [2024-07-15 17:06:15.484352] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:17:25.354 [2024-07-15 17:06:15.484374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.354 [2024-07-15 17:06:15.484384] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:17:25.354 [2024-07-15 17:06:15.484999] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21e3860 (9): Bad file descriptor 00:17:25.354 [2024-07-15 17:06:15.486018] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:17:25.354 [2024-07-15 17:06:15.486041] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:17:25.354 17:06:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:25.354 17:06:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:25.354 17:06:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:25.354 17:06:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.354 17:06:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:25.354 17:06:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:25.354 17:06:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:25.354 17:06:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.354 17:06:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:17:25.354 17:06:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:25.354 17:06:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:25.354 17:06:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:17:25.354 17:06:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:25.354 17:06:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:25.354 17:06:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:25.354 17:06:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:25.354 17:06:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:25.354 17:06:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.354 17:06:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:25.354 17:06:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.354 17:06:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:17:25.354 17:06:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:26.739 17:06:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:26.740 17:06:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:26.740 17:06:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:26.740 17:06:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.740 17:06:16 
nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:26.740 17:06:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:26.740 17:06:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:26.740 17:06:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.740 17:06:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:17:26.740 17:06:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:27.304 [2024-07-15 17:06:17.491642] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:17:27.304 [2024-07-15 17:06:17.491869] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:17:27.304 [2024-07-15 17:06:17.491933] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:17:27.304 [2024-07-15 17:06:17.497681] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:17:27.304 [2024-07-15 17:06:17.554195] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:17:27.304 [2024-07-15 17:06:17.554415] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:17:27.304 [2024-07-15 17:06:17.554454] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:17:27.304 [2024-07-15 17:06:17.554474] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:17:27.304 [2024-07-15 17:06:17.554483] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:17:27.304 [2024-07-15 17:06:17.560393] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x2286d90 was disconnected and freed. delete nvme_qpair. 
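The trace above and below repeatedly expands the test's get_bdev_list and wait_for_bdev helpers (host/discovery_remove_ifc.sh@29 and @33-34): bdev_get_bdevs is queried over /tmp/host.sock, the names are extracted with jq, sorted, flattened with xargs, and the test sleeps one second at a time until the list matches the expected bdev (nvme0n1 after the first attach, empty after the interface is removed, nvme1n1 after rediscovery). A rough shell reconstruction of what those expansions amount to is sketched below; it is inferred from the xtrace output only, so the real helpers in test/nvmf/host/discovery_remove_ifc.sh may differ in detail, and rpc_cmd is assumed to wrap scripts/rpc.py with the -s socket argument.

    # Sketch reconstructed from the xtrace expansions; not the verbatim test source.
    get_bdev_list() {
        # Ask the host application on /tmp/host.sock for its bdevs and return
        # the names as a single sorted, space-separated line.
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        # Poll once per second until the bdev list equals the expected value,
        # e.g. "nvme0n1", "" (after the interface goes away), or "nvme1n1".
        local expected=$1
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1
        done
    }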
00:17:27.562 17:06:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:27.562 17:06:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:27.562 17:06:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:27.562 17:06:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:27.562 17:06:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.562 17:06:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:27.562 17:06:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:27.562 17:06:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.562 17:06:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:17:27.562 17:06:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:17:27.562 17:06:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 77619 00:17:27.562 17:06:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 77619 ']' 00:17:27.562 17:06:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 77619 00:17:27.562 17:06:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:17:27.562 17:06:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:27.562 17:06:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77619 00:17:27.562 killing process with pid 77619 00:17:27.562 17:06:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:27.562 17:06:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:27.562 17:06:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77619' 00:17:27.562 17:06:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 77619 00:17:27.562 17:06:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 77619 00:17:27.820 17:06:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:17:27.820 17:06:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:27.820 17:06:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:17:27.820 17:06:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:27.820 17:06:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:17:27.820 17:06:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:27.820 17:06:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:27.820 rmmod nvme_tcp 00:17:27.820 rmmod nvme_fabrics 00:17:27.820 rmmod nvme_keyring 00:17:27.820 17:06:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:27.820 17:06:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:17:27.820 17:06:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:17:27.820 17:06:18 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 77587 ']' 00:17:27.820 17:06:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 77587 00:17:27.820 17:06:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 77587 ']' 00:17:27.820 17:06:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 77587 00:17:27.820 17:06:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:17:27.820 17:06:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:27.820 17:06:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77587 00:17:28.079 17:06:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:28.079 17:06:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:28.079 17:06:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77587' 00:17:28.079 killing process with pid 77587 00:17:28.079 17:06:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 77587 00:17:28.079 17:06:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 77587 00:17:28.079 17:06:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:28.079 17:06:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:28.079 17:06:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:28.079 17:06:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:28.079 17:06:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:28.079 17:06:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:28.079 17:06:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:28.079 17:06:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:28.338 17:06:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:28.338 ************************************ 00:17:28.338 END TEST nvmf_discovery_remove_ifc 00:17:28.338 ************************************ 00:17:28.338 00:17:28.338 real 0m14.215s 00:17:28.338 user 0m24.514s 00:17:28.338 sys 0m2.585s 00:17:28.338 17:06:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:28.338 17:06:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:28.338 17:06:18 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:28.338 17:06:18 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:17:28.338 17:06:18 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:28.338 17:06:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:28.338 17:06:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:28.338 ************************************ 00:17:28.338 START TEST nvmf_identify_kernel_target 00:17:28.338 ************************************ 00:17:28.338 17:06:18 nvmf_tcp.nvmf_identify_kernel_target 
-- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:17:28.338 * Looking for test storage... 00:17:28.338 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:28.338 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:28.338 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:17:28.338 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:28.338 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:28.338 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:28.338 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:28.338 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:28.338 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:28.338 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:28.338 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:28.338 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:28.338 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:28.338 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da 00:17:28.338 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=0b4e8503-7bac-4879-926a-209303c4b3da 00:17:28.338 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:28.338 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:28.338 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:28.338 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:28.338 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:28.338 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:28.338 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:28.338 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:28.338 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.338 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.338 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.338 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:17:28.338 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.338 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:17:28.338 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:28.338 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:28.338 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:28.338 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:28.338 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:28.338 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:28.338 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:28.338 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:28.338 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:17:28.338 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:28.338 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:28.338 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:28.338 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:28.338 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:28.338 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:17:28.338 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:28.338 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:28.338 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:28.338 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:28.338 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:28.338 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:28.338 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:28.338 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:28.338 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:28.338 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:28.338 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:28.338 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:28.338 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:28.338 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:28.338 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:28.338 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:28.338 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:28.338 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:28.338 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:28.338 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:28.338 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:28.338 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:28.338 Cannot find device "nvmf_tgt_br" 00:17:28.338 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # true 00:17:28.338 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:28.338 Cannot find device "nvmf_tgt_br2" 00:17:28.338 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # true 00:17:28.338 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:28.338 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:28.338 Cannot find device "nvmf_tgt_br" 00:17:28.338 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # true 00:17:28.338 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:28.338 Cannot find device "nvmf_tgt_br2" 00:17:28.338 17:06:18 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # true 00:17:28.338 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:28.596 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:28.596 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:28.596 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:28.596 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:17:28.596 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:28.596 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:28.596 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:17:28.596 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:28.596 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:28.596 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:28.596 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:28.596 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:28.596 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:28.596 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:28.596 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:28.596 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:28.596 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:28.596 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:28.596 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:28.596 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:28.596 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:28.596 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:28.597 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:28.597 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:28.597 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:28.597 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:28.597 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br 
master nvmf_br 00:17:28.597 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:28.597 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:28.597 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:28.597 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:28.597 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:28.597 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.122 ms 00:17:28.597 00:17:28.597 --- 10.0.0.2 ping statistics --- 00:17:28.597 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:28.597 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:17:28.597 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:28.597 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:28.597 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:17:28.597 00:17:28.597 --- 10.0.0.3 ping statistics --- 00:17:28.597 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:28.597 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:17:28.597 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:28.597 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:28.597 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:17:28.597 00:17:28.597 --- 10.0.0.1 ping statistics --- 00:17:28.597 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:28.597 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:17:28.597 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:28.597 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@433 -- # return 0 00:17:28.597 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:28.597 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:28.597 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:28.597 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:28.597 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:28.597 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:28.597 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:28.854 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:17:28.854 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:17:28.854 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:17:28.855 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:28.855 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:28.855 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:28.855 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:28.855 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:28.855 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:28.855 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:28.855 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:28.855 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:28.855 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:17:28.855 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:17:28.855 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:17:28.855 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:17:28.855 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:17:28.855 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:17:28.855 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:17:28.855 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:17:28.855 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:17:28.855 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:17:28.855 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:17:28.855 17:06:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:17:29.170 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:29.170 Waiting for block devices as requested 00:17:29.170 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:17:29.170 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:17:29.431 17:06:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:17:29.431 17:06:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:17:29.431 17:06:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:17:29.431 17:06:19 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:17:29.431 17:06:19 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:17:29.431 17:06:19 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:17:29.431 17:06:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:17:29.431 17:06:19 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:17:29.431 17:06:19 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:17:29.431 No valid GPT data, bailing 00:17:29.431 17:06:19 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:17:29.431 17:06:19 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:17:29.431 17:06:19 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:17:29.431 17:06:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:17:29.431 17:06:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:17:29.431 17:06:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:17:29.431 17:06:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:17:29.431 17:06:19 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:17:29.431 17:06:19 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:17:29.431 17:06:19 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:17:29.431 17:06:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:17:29.431 17:06:19 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:17:29.431 17:06:19 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:17:29.431 No valid GPT data, bailing 00:17:29.431 17:06:19 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:17:29.431 17:06:19 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 
-- # pt= 00:17:29.431 17:06:19 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:17:29.431 17:06:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:17:29.431 17:06:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:17:29.432 17:06:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:17:29.432 17:06:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:17:29.432 17:06:19 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:17:29.432 17:06:19 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:17:29.432 17:06:19 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:17:29.432 17:06:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:17:29.432 17:06:19 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:17:29.432 17:06:19 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:17:29.432 No valid GPT data, bailing 00:17:29.432 17:06:19 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:17:29.432 17:06:19 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:17:29.432 17:06:19 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:17:29.432 17:06:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:17:29.432 17:06:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:17:29.432 17:06:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:17:29.432 17:06:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:17:29.432 17:06:19 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:17:29.432 17:06:19 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:17:29.432 17:06:19 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:17:29.432 17:06:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:17:29.432 17:06:19 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:17:29.432 17:06:19 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:17:29.432 No valid GPT data, bailing 00:17:29.432 17:06:19 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:17:29.432 17:06:19 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:17:29.432 17:06:19 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:17:29.432 17:06:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:17:29.432 17:06:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:17:29.432 17:06:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 
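The entries above load the nvmet module, screen the visible block devices with spdk-gpt.py and blkid until one that is not in use is found (/dev/nvme1n1 here), and create the subsystem directory; the entries that follow finish wiring up the kernel NVMe-oF TCP target through configfs. Condensed, the configure_kernel_target sequence amounts to roughly the sketch below. Note that xtrace does not show where the bare echo commands are redirected, so the configfs attribute paths here are assumptions based on the standard nvmet layout; the NQN, backing device, listen address, and port are the values visible in the trace.

    # Minimal sketch of the kernel target setup driven by configure_kernel_target.
    # Redirect targets are assumed (xtrace hides redirections); NQN, device,
    # address and port are taken from the surrounding trace.
    nqn=nqn.2016-06.io.spdk:testnqn
    subsys=/sys/kernel/config/nvmet/subsystems/$nqn
    port=/sys/kernel/config/nvmet/ports/1

    modprobe nvmet                      # exposes /sys/kernel/config/nvmet

    mkdir "$subsys"                     # new NVM subsystem
    mkdir "$subsys/namespaces/1"        # namespace 1 inside it
    mkdir "$port"                       # listener (port 1)

    echo "SPDK-$nqn"  > "$subsys/attr_model"               # reported later as Model Number
    echo 1            > "$subsys/attr_allow_any_host"
    echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path" # backing device picked above
    echo 1            > "$subsys/namespaces/1/enable"

    echo 10.0.0.1 > "$port/addr_traddr"   # host-side veth address
    echo tcp      > "$port/addr_trtype"
    echo 4420     > "$port/addr_trsvcid"
    echo ipv4     > "$port/addr_adrfam"

    ln -s "$subsys" "$port/subsystems/"   # publish the subsystem on the port

With that in place, the nvme discover against 10.0.0.1:4420 further down returns two records, the discovery subsystem itself and nqn.2016-06.io.spdk:testnqn, which is exactly what the Discovery Log output shows.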
00:17:29.432 17:06:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:17:29.432 17:06:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:17:29.432 17:06:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:17:29.432 17:06:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:17:29.432 17:06:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:17:29.432 17:06:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:17:29.432 17:06:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:17:29.432 17:06:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:17:29.432 17:06:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:17:29.432 17:06:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:17:29.432 17:06:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:17:29.692 17:06:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --hostid=0b4e8503-7bac-4879-926a-209303c4b3da -a 10.0.0.1 -t tcp -s 4420 00:17:29.692 00:17:29.693 Discovery Log Number of Records 2, Generation counter 2 00:17:29.693 =====Discovery Log Entry 0====== 00:17:29.693 trtype: tcp 00:17:29.693 adrfam: ipv4 00:17:29.693 subtype: current discovery subsystem 00:17:29.693 treq: not specified, sq flow control disable supported 00:17:29.693 portid: 1 00:17:29.693 trsvcid: 4420 00:17:29.693 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:17:29.693 traddr: 10.0.0.1 00:17:29.693 eflags: none 00:17:29.693 sectype: none 00:17:29.693 =====Discovery Log Entry 1====== 00:17:29.693 trtype: tcp 00:17:29.693 adrfam: ipv4 00:17:29.693 subtype: nvme subsystem 00:17:29.693 treq: not specified, sq flow control disable supported 00:17:29.693 portid: 1 00:17:29.693 trsvcid: 4420 00:17:29.693 subnqn: nqn.2016-06.io.spdk:testnqn 00:17:29.693 traddr: 10.0.0.1 00:17:29.693 eflags: none 00:17:29.693 sectype: none 00:17:29.693 17:06:19 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:17:29.693 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:17:29.693 ===================================================== 00:17:29.693 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:17:29.693 ===================================================== 00:17:29.693 Controller Capabilities/Features 00:17:29.693 ================================ 00:17:29.693 Vendor ID: 0000 00:17:29.693 Subsystem Vendor ID: 0000 00:17:29.693 Serial Number: f7de21206ced5a466364 00:17:29.693 Model Number: Linux 00:17:29.693 Firmware Version: 6.7.0-68 00:17:29.693 Recommended Arb Burst: 0 00:17:29.693 IEEE OUI Identifier: 00 00 00 00:17:29.693 Multi-path I/O 00:17:29.693 May have multiple subsystem ports: No 00:17:29.693 May have multiple controllers: No 00:17:29.693 Associated with SR-IOV VF: No 00:17:29.693 Max Data Transfer Size: Unlimited 00:17:29.693 Max Number of Namespaces: 0 
00:17:29.693 Max Number of I/O Queues: 1024 00:17:29.693 NVMe Specification Version (VS): 1.3 00:17:29.693 NVMe Specification Version (Identify): 1.3 00:17:29.693 Maximum Queue Entries: 1024 00:17:29.693 Contiguous Queues Required: No 00:17:29.693 Arbitration Mechanisms Supported 00:17:29.693 Weighted Round Robin: Not Supported 00:17:29.693 Vendor Specific: Not Supported 00:17:29.693 Reset Timeout: 7500 ms 00:17:29.693 Doorbell Stride: 4 bytes 00:17:29.693 NVM Subsystem Reset: Not Supported 00:17:29.693 Command Sets Supported 00:17:29.693 NVM Command Set: Supported 00:17:29.693 Boot Partition: Not Supported 00:17:29.693 Memory Page Size Minimum: 4096 bytes 00:17:29.693 Memory Page Size Maximum: 4096 bytes 00:17:29.693 Persistent Memory Region: Not Supported 00:17:29.693 Optional Asynchronous Events Supported 00:17:29.693 Namespace Attribute Notices: Not Supported 00:17:29.693 Firmware Activation Notices: Not Supported 00:17:29.693 ANA Change Notices: Not Supported 00:17:29.693 PLE Aggregate Log Change Notices: Not Supported 00:17:29.693 LBA Status Info Alert Notices: Not Supported 00:17:29.693 EGE Aggregate Log Change Notices: Not Supported 00:17:29.693 Normal NVM Subsystem Shutdown event: Not Supported 00:17:29.693 Zone Descriptor Change Notices: Not Supported 00:17:29.693 Discovery Log Change Notices: Supported 00:17:29.693 Controller Attributes 00:17:29.693 128-bit Host Identifier: Not Supported 00:17:29.693 Non-Operational Permissive Mode: Not Supported 00:17:29.693 NVM Sets: Not Supported 00:17:29.693 Read Recovery Levels: Not Supported 00:17:29.693 Endurance Groups: Not Supported 00:17:29.693 Predictable Latency Mode: Not Supported 00:17:29.693 Traffic Based Keep ALive: Not Supported 00:17:29.693 Namespace Granularity: Not Supported 00:17:29.693 SQ Associations: Not Supported 00:17:29.693 UUID List: Not Supported 00:17:29.693 Multi-Domain Subsystem: Not Supported 00:17:29.693 Fixed Capacity Management: Not Supported 00:17:29.693 Variable Capacity Management: Not Supported 00:17:29.693 Delete Endurance Group: Not Supported 00:17:29.693 Delete NVM Set: Not Supported 00:17:29.693 Extended LBA Formats Supported: Not Supported 00:17:29.693 Flexible Data Placement Supported: Not Supported 00:17:29.693 00:17:29.693 Controller Memory Buffer Support 00:17:29.693 ================================ 00:17:29.693 Supported: No 00:17:29.693 00:17:29.693 Persistent Memory Region Support 00:17:29.693 ================================ 00:17:29.693 Supported: No 00:17:29.693 00:17:29.693 Admin Command Set Attributes 00:17:29.693 ============================ 00:17:29.693 Security Send/Receive: Not Supported 00:17:29.693 Format NVM: Not Supported 00:17:29.693 Firmware Activate/Download: Not Supported 00:17:29.693 Namespace Management: Not Supported 00:17:29.693 Device Self-Test: Not Supported 00:17:29.693 Directives: Not Supported 00:17:29.693 NVMe-MI: Not Supported 00:17:29.693 Virtualization Management: Not Supported 00:17:29.693 Doorbell Buffer Config: Not Supported 00:17:29.693 Get LBA Status Capability: Not Supported 00:17:29.693 Command & Feature Lockdown Capability: Not Supported 00:17:29.693 Abort Command Limit: 1 00:17:29.693 Async Event Request Limit: 1 00:17:29.693 Number of Firmware Slots: N/A 00:17:29.693 Firmware Slot 1 Read-Only: N/A 00:17:29.693 Firmware Activation Without Reset: N/A 00:17:29.693 Multiple Update Detection Support: N/A 00:17:29.693 Firmware Update Granularity: No Information Provided 00:17:29.693 Per-Namespace SMART Log: No 00:17:29.693 Asymmetric Namespace Access Log Page: 
Not Supported 00:17:29.693 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:17:29.693 Command Effects Log Page: Not Supported 00:17:29.693 Get Log Page Extended Data: Supported 00:17:29.693 Telemetry Log Pages: Not Supported 00:17:29.693 Persistent Event Log Pages: Not Supported 00:17:29.693 Supported Log Pages Log Page: May Support 00:17:29.693 Commands Supported & Effects Log Page: Not Supported 00:17:29.693 Feature Identifiers & Effects Log Page:May Support 00:17:29.693 NVMe-MI Commands & Effects Log Page: May Support 00:17:29.693 Data Area 4 for Telemetry Log: Not Supported 00:17:29.693 Error Log Page Entries Supported: 1 00:17:29.693 Keep Alive: Not Supported 00:17:29.693 00:17:29.693 NVM Command Set Attributes 00:17:29.693 ========================== 00:17:29.693 Submission Queue Entry Size 00:17:29.693 Max: 1 00:17:29.693 Min: 1 00:17:29.693 Completion Queue Entry Size 00:17:29.693 Max: 1 00:17:29.693 Min: 1 00:17:29.693 Number of Namespaces: 0 00:17:29.693 Compare Command: Not Supported 00:17:29.693 Write Uncorrectable Command: Not Supported 00:17:29.693 Dataset Management Command: Not Supported 00:17:29.693 Write Zeroes Command: Not Supported 00:17:29.693 Set Features Save Field: Not Supported 00:17:29.693 Reservations: Not Supported 00:17:29.693 Timestamp: Not Supported 00:17:29.693 Copy: Not Supported 00:17:29.693 Volatile Write Cache: Not Present 00:17:29.693 Atomic Write Unit (Normal): 1 00:17:29.693 Atomic Write Unit (PFail): 1 00:17:29.693 Atomic Compare & Write Unit: 1 00:17:29.693 Fused Compare & Write: Not Supported 00:17:29.693 Scatter-Gather List 00:17:29.693 SGL Command Set: Supported 00:17:29.693 SGL Keyed: Not Supported 00:17:29.693 SGL Bit Bucket Descriptor: Not Supported 00:17:29.693 SGL Metadata Pointer: Not Supported 00:17:29.693 Oversized SGL: Not Supported 00:17:29.693 SGL Metadata Address: Not Supported 00:17:29.693 SGL Offset: Supported 00:17:29.693 Transport SGL Data Block: Not Supported 00:17:29.693 Replay Protected Memory Block: Not Supported 00:17:29.693 00:17:29.693 Firmware Slot Information 00:17:29.693 ========================= 00:17:29.693 Active slot: 0 00:17:29.693 00:17:29.693 00:17:29.693 Error Log 00:17:29.693 ========= 00:17:29.693 00:17:29.693 Active Namespaces 00:17:29.693 ================= 00:17:29.693 Discovery Log Page 00:17:29.693 ================== 00:17:29.693 Generation Counter: 2 00:17:29.693 Number of Records: 2 00:17:29.693 Record Format: 0 00:17:29.693 00:17:29.693 Discovery Log Entry 0 00:17:29.693 ---------------------- 00:17:29.693 Transport Type: 3 (TCP) 00:17:29.693 Address Family: 1 (IPv4) 00:17:29.693 Subsystem Type: 3 (Current Discovery Subsystem) 00:17:29.693 Entry Flags: 00:17:29.693 Duplicate Returned Information: 0 00:17:29.693 Explicit Persistent Connection Support for Discovery: 0 00:17:29.693 Transport Requirements: 00:17:29.693 Secure Channel: Not Specified 00:17:29.693 Port ID: 1 (0x0001) 00:17:29.693 Controller ID: 65535 (0xffff) 00:17:29.693 Admin Max SQ Size: 32 00:17:29.693 Transport Service Identifier: 4420 00:17:29.693 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:17:29.693 Transport Address: 10.0.0.1 00:17:29.694 Discovery Log Entry 1 00:17:29.694 ---------------------- 00:17:29.694 Transport Type: 3 (TCP) 00:17:29.694 Address Family: 1 (IPv4) 00:17:29.694 Subsystem Type: 2 (NVM Subsystem) 00:17:29.694 Entry Flags: 00:17:29.694 Duplicate Returned Information: 0 00:17:29.694 Explicit Persistent Connection Support for Discovery: 0 00:17:29.694 Transport Requirements: 00:17:29.694 
Secure Channel: Not Specified 00:17:29.694 Port ID: 1 (0x0001) 00:17:29.694 Controller ID: 65535 (0xffff) 00:17:29.694 Admin Max SQ Size: 32 00:17:29.694 Transport Service Identifier: 4420 00:17:29.694 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:17:29.694 Transport Address: 10.0.0.1 00:17:29.694 17:06:19 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:17:29.953 get_feature(0x01) failed 00:17:29.953 get_feature(0x02) failed 00:17:29.953 get_feature(0x04) failed 00:17:29.953 ===================================================== 00:17:29.953 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:17:29.953 ===================================================== 00:17:29.953 Controller Capabilities/Features 00:17:29.953 ================================ 00:17:29.953 Vendor ID: 0000 00:17:29.953 Subsystem Vendor ID: 0000 00:17:29.953 Serial Number: 0e7a2a134c2bef51a42c 00:17:29.953 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:17:29.953 Firmware Version: 6.7.0-68 00:17:29.953 Recommended Arb Burst: 6 00:17:29.953 IEEE OUI Identifier: 00 00 00 00:17:29.953 Multi-path I/O 00:17:29.953 May have multiple subsystem ports: Yes 00:17:29.953 May have multiple controllers: Yes 00:17:29.953 Associated with SR-IOV VF: No 00:17:29.953 Max Data Transfer Size: Unlimited 00:17:29.953 Max Number of Namespaces: 1024 00:17:29.953 Max Number of I/O Queues: 128 00:17:29.953 NVMe Specification Version (VS): 1.3 00:17:29.953 NVMe Specification Version (Identify): 1.3 00:17:29.953 Maximum Queue Entries: 1024 00:17:29.953 Contiguous Queues Required: No 00:17:29.953 Arbitration Mechanisms Supported 00:17:29.953 Weighted Round Robin: Not Supported 00:17:29.953 Vendor Specific: Not Supported 00:17:29.953 Reset Timeout: 7500 ms 00:17:29.953 Doorbell Stride: 4 bytes 00:17:29.953 NVM Subsystem Reset: Not Supported 00:17:29.953 Command Sets Supported 00:17:29.953 NVM Command Set: Supported 00:17:29.953 Boot Partition: Not Supported 00:17:29.953 Memory Page Size Minimum: 4096 bytes 00:17:29.953 Memory Page Size Maximum: 4096 bytes 00:17:29.953 Persistent Memory Region: Not Supported 00:17:29.953 Optional Asynchronous Events Supported 00:17:29.953 Namespace Attribute Notices: Supported 00:17:29.953 Firmware Activation Notices: Not Supported 00:17:29.953 ANA Change Notices: Supported 00:17:29.953 PLE Aggregate Log Change Notices: Not Supported 00:17:29.953 LBA Status Info Alert Notices: Not Supported 00:17:29.953 EGE Aggregate Log Change Notices: Not Supported 00:17:29.953 Normal NVM Subsystem Shutdown event: Not Supported 00:17:29.953 Zone Descriptor Change Notices: Not Supported 00:17:29.953 Discovery Log Change Notices: Not Supported 00:17:29.953 Controller Attributes 00:17:29.953 128-bit Host Identifier: Supported 00:17:29.953 Non-Operational Permissive Mode: Not Supported 00:17:29.953 NVM Sets: Not Supported 00:17:29.953 Read Recovery Levels: Not Supported 00:17:29.953 Endurance Groups: Not Supported 00:17:29.953 Predictable Latency Mode: Not Supported 00:17:29.953 Traffic Based Keep ALive: Supported 00:17:29.953 Namespace Granularity: Not Supported 00:17:29.953 SQ Associations: Not Supported 00:17:29.953 UUID List: Not Supported 00:17:29.953 Multi-Domain Subsystem: Not Supported 00:17:29.953 Fixed Capacity Management: Not Supported 00:17:29.953 Variable Capacity Management: Not Supported 00:17:29.953 
Delete Endurance Group: Not Supported 00:17:29.953 Delete NVM Set: Not Supported 00:17:29.953 Extended LBA Formats Supported: Not Supported 00:17:29.953 Flexible Data Placement Supported: Not Supported 00:17:29.953 00:17:29.953 Controller Memory Buffer Support 00:17:29.953 ================================ 00:17:29.953 Supported: No 00:17:29.953 00:17:29.953 Persistent Memory Region Support 00:17:29.953 ================================ 00:17:29.953 Supported: No 00:17:29.953 00:17:29.953 Admin Command Set Attributes 00:17:29.953 ============================ 00:17:29.953 Security Send/Receive: Not Supported 00:17:29.953 Format NVM: Not Supported 00:17:29.953 Firmware Activate/Download: Not Supported 00:17:29.953 Namespace Management: Not Supported 00:17:29.953 Device Self-Test: Not Supported 00:17:29.953 Directives: Not Supported 00:17:29.953 NVMe-MI: Not Supported 00:17:29.953 Virtualization Management: Not Supported 00:17:29.953 Doorbell Buffer Config: Not Supported 00:17:29.953 Get LBA Status Capability: Not Supported 00:17:29.953 Command & Feature Lockdown Capability: Not Supported 00:17:29.953 Abort Command Limit: 4 00:17:29.953 Async Event Request Limit: 4 00:17:29.953 Number of Firmware Slots: N/A 00:17:29.953 Firmware Slot 1 Read-Only: N/A 00:17:29.953 Firmware Activation Without Reset: N/A 00:17:29.953 Multiple Update Detection Support: N/A 00:17:29.953 Firmware Update Granularity: No Information Provided 00:17:29.953 Per-Namespace SMART Log: Yes 00:17:29.953 Asymmetric Namespace Access Log Page: Supported 00:17:29.953 ANA Transition Time : 10 sec 00:17:29.953 00:17:29.953 Asymmetric Namespace Access Capabilities 00:17:29.953 ANA Optimized State : Supported 00:17:29.953 ANA Non-Optimized State : Supported 00:17:29.953 ANA Inaccessible State : Supported 00:17:29.953 ANA Persistent Loss State : Supported 00:17:29.953 ANA Change State : Supported 00:17:29.953 ANAGRPID is not changed : No 00:17:29.953 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:17:29.953 00:17:29.953 ANA Group Identifier Maximum : 128 00:17:29.953 Number of ANA Group Identifiers : 128 00:17:29.953 Max Number of Allowed Namespaces : 1024 00:17:29.953 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:17:29.953 Command Effects Log Page: Supported 00:17:29.953 Get Log Page Extended Data: Supported 00:17:29.953 Telemetry Log Pages: Not Supported 00:17:29.953 Persistent Event Log Pages: Not Supported 00:17:29.953 Supported Log Pages Log Page: May Support 00:17:29.953 Commands Supported & Effects Log Page: Not Supported 00:17:29.954 Feature Identifiers & Effects Log Page:May Support 00:17:29.954 NVMe-MI Commands & Effects Log Page: May Support 00:17:29.954 Data Area 4 for Telemetry Log: Not Supported 00:17:29.954 Error Log Page Entries Supported: 128 00:17:29.954 Keep Alive: Supported 00:17:29.954 Keep Alive Granularity: 1000 ms 00:17:29.954 00:17:29.954 NVM Command Set Attributes 00:17:29.954 ========================== 00:17:29.954 Submission Queue Entry Size 00:17:29.954 Max: 64 00:17:29.954 Min: 64 00:17:29.954 Completion Queue Entry Size 00:17:29.954 Max: 16 00:17:29.954 Min: 16 00:17:29.954 Number of Namespaces: 1024 00:17:29.954 Compare Command: Not Supported 00:17:29.954 Write Uncorrectable Command: Not Supported 00:17:29.954 Dataset Management Command: Supported 00:17:29.954 Write Zeroes Command: Supported 00:17:29.954 Set Features Save Field: Not Supported 00:17:29.954 Reservations: Not Supported 00:17:29.954 Timestamp: Not Supported 00:17:29.954 Copy: Not Supported 00:17:29.954 Volatile Write Cache: Present 
00:17:29.954 Atomic Write Unit (Normal): 1 00:17:29.954 Atomic Write Unit (PFail): 1 00:17:29.954 Atomic Compare & Write Unit: 1 00:17:29.954 Fused Compare & Write: Not Supported 00:17:29.954 Scatter-Gather List 00:17:29.954 SGL Command Set: Supported 00:17:29.954 SGL Keyed: Not Supported 00:17:29.954 SGL Bit Bucket Descriptor: Not Supported 00:17:29.954 SGL Metadata Pointer: Not Supported 00:17:29.954 Oversized SGL: Not Supported 00:17:29.954 SGL Metadata Address: Not Supported 00:17:29.954 SGL Offset: Supported 00:17:29.954 Transport SGL Data Block: Not Supported 00:17:29.954 Replay Protected Memory Block: Not Supported 00:17:29.954 00:17:29.954 Firmware Slot Information 00:17:29.954 ========================= 00:17:29.954 Active slot: 0 00:17:29.954 00:17:29.954 Asymmetric Namespace Access 00:17:29.954 =========================== 00:17:29.954 Change Count : 0 00:17:29.954 Number of ANA Group Descriptors : 1 00:17:29.954 ANA Group Descriptor : 0 00:17:29.954 ANA Group ID : 1 00:17:29.954 Number of NSID Values : 1 00:17:29.954 Change Count : 0 00:17:29.954 ANA State : 1 00:17:29.954 Namespace Identifier : 1 00:17:29.954 00:17:29.954 Commands Supported and Effects 00:17:29.954 ============================== 00:17:29.954 Admin Commands 00:17:29.954 -------------- 00:17:29.954 Get Log Page (02h): Supported 00:17:29.954 Identify (06h): Supported 00:17:29.954 Abort (08h): Supported 00:17:29.954 Set Features (09h): Supported 00:17:29.954 Get Features (0Ah): Supported 00:17:29.954 Asynchronous Event Request (0Ch): Supported 00:17:29.954 Keep Alive (18h): Supported 00:17:29.954 I/O Commands 00:17:29.954 ------------ 00:17:29.954 Flush (00h): Supported 00:17:29.954 Write (01h): Supported LBA-Change 00:17:29.954 Read (02h): Supported 00:17:29.954 Write Zeroes (08h): Supported LBA-Change 00:17:29.954 Dataset Management (09h): Supported 00:17:29.954 00:17:29.954 Error Log 00:17:29.954 ========= 00:17:29.954 Entry: 0 00:17:29.954 Error Count: 0x3 00:17:29.954 Submission Queue Id: 0x0 00:17:29.954 Command Id: 0x5 00:17:29.954 Phase Bit: 0 00:17:29.954 Status Code: 0x2 00:17:29.954 Status Code Type: 0x0 00:17:29.954 Do Not Retry: 1 00:17:29.954 Error Location: 0x28 00:17:29.954 LBA: 0x0 00:17:29.954 Namespace: 0x0 00:17:29.954 Vendor Log Page: 0x0 00:17:29.954 ----------- 00:17:29.954 Entry: 1 00:17:29.954 Error Count: 0x2 00:17:29.954 Submission Queue Id: 0x0 00:17:29.954 Command Id: 0x5 00:17:29.954 Phase Bit: 0 00:17:29.954 Status Code: 0x2 00:17:29.954 Status Code Type: 0x0 00:17:29.954 Do Not Retry: 1 00:17:29.954 Error Location: 0x28 00:17:29.954 LBA: 0x0 00:17:29.954 Namespace: 0x0 00:17:29.954 Vendor Log Page: 0x0 00:17:29.954 ----------- 00:17:29.954 Entry: 2 00:17:29.954 Error Count: 0x1 00:17:29.954 Submission Queue Id: 0x0 00:17:29.954 Command Id: 0x4 00:17:29.954 Phase Bit: 0 00:17:29.954 Status Code: 0x2 00:17:29.954 Status Code Type: 0x0 00:17:29.954 Do Not Retry: 1 00:17:29.954 Error Location: 0x28 00:17:29.954 LBA: 0x0 00:17:29.954 Namespace: 0x0 00:17:29.954 Vendor Log Page: 0x0 00:17:29.954 00:17:29.954 Number of Queues 00:17:29.954 ================ 00:17:29.954 Number of I/O Submission Queues: 128 00:17:29.954 Number of I/O Completion Queues: 128 00:17:29.954 00:17:29.954 ZNS Specific Controller Data 00:17:29.954 ============================ 00:17:29.954 Zone Append Size Limit: 0 00:17:29.954 00:17:29.954 00:17:29.954 Active Namespaces 00:17:29.954 ================= 00:17:29.954 get_feature(0x05) failed 00:17:29.954 Namespace ID:1 00:17:29.954 Command Set Identifier: NVM (00h) 
00:17:29.954 Deallocate: Supported 00:17:29.954 Deallocated/Unwritten Error: Not Supported 00:17:29.954 Deallocated Read Value: Unknown 00:17:29.954 Deallocate in Write Zeroes: Not Supported 00:17:29.954 Deallocated Guard Field: 0xFFFF 00:17:29.954 Flush: Supported 00:17:29.954 Reservation: Not Supported 00:17:29.954 Namespace Sharing Capabilities: Multiple Controllers 00:17:29.954 Size (in LBAs): 1310720 (5GiB) 00:17:29.954 Capacity (in LBAs): 1310720 (5GiB) 00:17:29.954 Utilization (in LBAs): 1310720 (5GiB) 00:17:29.954 UUID: ee439a1a-b947-40af-b041-e1da4c44d9da 00:17:29.954 Thin Provisioning: Not Supported 00:17:29.954 Per-NS Atomic Units: Yes 00:17:29.954 Atomic Boundary Size (Normal): 0 00:17:29.954 Atomic Boundary Size (PFail): 0 00:17:29.954 Atomic Boundary Offset: 0 00:17:29.954 NGUID/EUI64 Never Reused: No 00:17:29.954 ANA group ID: 1 00:17:29.954 Namespace Write Protected: No 00:17:29.954 Number of LBA Formats: 1 00:17:29.954 Current LBA Format: LBA Format #00 00:17:29.954 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:17:29.954 00:17:29.954 17:06:20 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:17:29.954 17:06:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:29.954 17:06:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:17:29.954 17:06:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:29.954 17:06:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:17:29.954 17:06:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:29.954 17:06:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:29.954 rmmod nvme_tcp 00:17:29.954 rmmod nvme_fabrics 00:17:29.954 17:06:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:29.954 17:06:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:17:29.954 17:06:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:17:29.954 17:06:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:17:29.954 17:06:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:29.954 17:06:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:29.954 17:06:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:29.954 17:06:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:29.954 17:06:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:29.954 17:06:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:29.954 17:06:20 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:29.954 17:06:20 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:30.213 17:06:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:30.213 17:06:20 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:17:30.213 17:06:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:17:30.213 
17:06:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:17:30.213 17:06:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:17:30.213 17:06:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:17:30.213 17:06:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:17:30.213 17:06:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:17:30.213 17:06:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:17:30.213 17:06:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:17:30.213 17:06:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:30.779 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:31.037 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:17:31.037 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:17:31.037 00:17:31.037 real 0m2.748s 00:17:31.037 user 0m0.959s 00:17:31.037 sys 0m1.281s 00:17:31.037 17:06:21 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:31.037 17:06:21 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.037 ************************************ 00:17:31.037 END TEST nvmf_identify_kernel_target 00:17:31.037 ************************************ 00:17:31.037 17:06:21 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:31.037 17:06:21 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:17:31.037 17:06:21 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:31.037 17:06:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:31.037 17:06:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:31.037 ************************************ 00:17:31.037 START TEST nvmf_auth_host 00:17:31.037 ************************************ 00:17:31.037 17:06:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:17:31.037 * Looking for test storage... 
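The clean_kernel_target trace just above (end of nvmf_identify_kernel_target) undoes that configfs setup before the next test starts. xtrace again hides the echo redirect, so the enable path in this sketch is an assumption that mirrors the setup; the rm, rmdir, and modprobe -r steps are the ones shown in the log.

    # Teardown mirror of the setup: disable the namespace, unlink the subsystem
    # from the port, remove the configfs directories, then unload the modules.
    nqn=nqn.2016-06.io.spdk:testnqn
    subsys=/sys/kernel/config/nvmet/subsystems/$nqn
    port=/sys/kernel/config/nvmet/ports/1

    echo 0 > "$subsys/namespaces/1/enable"   # assumed target of the bare "echo 0"
    rm -f "$port/subsystems/$nqn"            # drop the port -> subsystem link first
    rmdir "$subsys/namespaces/1" "$port" "$subsys"
    modprobe -r nvmet_tcp nvmet

scripts/setup.sh then rebinds the NVMe PCI devices to uio_pci_generic, which is what the "nvme -> uio_pci_generic" lines above record.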
00:17:31.037 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:31.037 17:06:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:31.037 17:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:17:31.296 17:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:31.296 17:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:31.296 17:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:31.296 17:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:31.296 17:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:31.296 17:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:31.296 17:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:31.296 17:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:31.296 17:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:31.296 17:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:31.296 17:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da 00:17:31.296 17:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=0b4e8503-7bac-4879-926a-209303c4b3da 00:17:31.296 17:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:31.296 17:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:31.296 17:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:31.296 17:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:31.296 17:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:31.296 17:06:21 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:31.296 17:06:21 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:31.296 17:06:21 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:31.296 17:06:21 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:31.296 17:06:21 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:31.296 17:06:21 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:31.296 17:06:21 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:17:31.296 17:06:21 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:31.296 17:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:17:31.296 17:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:31.296 17:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:31.296 17:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:31.296 17:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:31.296 17:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:31.296 17:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:31.296 17:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:31.296 17:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:31.296 17:06:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:17:31.296 17:06:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:17:31.296 17:06:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:17:31.296 17:06:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:17:31.296 17:06:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:31.296 17:06:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:17:31.296 17:06:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:17:31.296 17:06:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:17:31.296 17:06:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:17:31.296 17:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:31.296 17:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:31.296 17:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:31.296 17:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:31.296 17:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:31.296 17:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:31.296 17:06:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:31.296 17:06:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:31.296 17:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:31.296 17:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:31.297 17:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:31.297 17:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:31.297 17:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:31.297 17:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:31.297 17:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:31.297 17:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:31.297 17:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:31.297 17:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:31.297 17:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:31.297 17:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:31.297 17:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:31.297 17:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:31.297 17:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:31.297 17:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:31.297 17:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:31.297 17:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:31.297 17:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:31.297 17:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:31.297 Cannot find device "nvmf_tgt_br" 00:17:31.297 17:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@155 -- # true 00:17:31.297 17:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:31.297 Cannot find device "nvmf_tgt_br2" 00:17:31.297 17:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@156 -- # true 00:17:31.297 17:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:31.297 17:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:31.297 Cannot find device "nvmf_tgt_br" 
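The variables traced above (NVMF_INITIATOR_IP=10.0.0.1, NVMF_BRIDGE=nvmf_br, the nvmf_init_*/nvmf_tgt_* interface names and the nvmf_tgt_ns_spdk namespace) describe the virtual topology that nvmf_veth_init builds in the trace that follows: the target lives in its own network namespace, each side gets a veth pair, and a bridge ties the pairs together so 10.0.0.1 (initiator) can reach 10.0.0.2 and 10.0.0.3 (target). A minimal sketch condensed from the ip commands below, using the same names:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target-side pair (nvmf_tgt_if2/nvmf_tgt_br2 is created the same way)
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br

The three pings at the end of the sequence are the sanity check that the bridge forwards in both directions before any NVMe traffic is attempted.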
00:17:31.297 17:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@158 -- # true 00:17:31.297 17:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:31.297 Cannot find device "nvmf_tgt_br2" 00:17:31.297 17:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@159 -- # true 00:17:31.297 17:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:31.297 17:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:31.297 17:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:31.297 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:31.297 17:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:17:31.297 17:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:31.297 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:31.297 17:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:17:31.297 17:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:31.297 17:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:31.297 17:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:31.297 17:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:31.297 17:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:31.297 17:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:31.297 17:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:31.297 17:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:31.556 17:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:31.556 17:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:31.556 17:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:31.556 17:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:31.556 17:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:31.556 17:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:31.556 17:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:31.556 17:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:31.556 17:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:31.556 17:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:31.556 17:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:31.556 17:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:31.556 17:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set 
nvmf_tgt_br2 master nvmf_br 00:17:31.556 17:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:31.556 17:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:31.556 17:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:31.556 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:31.556 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:17:31.556 00:17:31.556 --- 10.0.0.2 ping statistics --- 00:17:31.556 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:31.556 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:17:31.556 17:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:31.556 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:31.556 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:17:31.556 00:17:31.556 --- 10.0.0.3 ping statistics --- 00:17:31.556 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:31.556 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:17:31.556 17:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:31.556 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:31.556 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:17:31.556 00:17:31.556 --- 10.0.0.1 ping statistics --- 00:17:31.556 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:31.556 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:17:31.556 17:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:31.556 17:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@433 -- # return 0 00:17:31.556 17:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:31.556 17:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:31.556 17:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:31.556 17:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:31.556 17:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:31.556 17:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:31.556 17:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:31.556 17:06:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:17:31.556 17:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:31.556 17:06:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:31.556 17:06:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.556 17:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=78499 00:17:31.556 17:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:17:31.556 17:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 78499 00:17:31.556 17:06:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 78499 ']' 00:17:31.556 17:06:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:31.556 17:06:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:31.556 17:06:21 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:31.556 17:06:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:31.556 17:06:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.489 17:06:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:32.489 17:06:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:17:32.489 17:06:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:32.489 17:06:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:32.489 17:06:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.747 17:06:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:32.747 17:06:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:17:32.747 17:06:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:17:32.747 17:06:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:17:32.747 17:06:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:32.747 17:06:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:17:32.747 17:06:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:17:32.747 17:06:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:17:32.747 17:06:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:32.747 17:06:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=0bf14f3c7755a809a3ae8356251bc3d5 00:17:32.747 17:06:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:17:32.747 17:06:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Geg 00:17:32.747 17:06:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 0bf14f3c7755a809a3ae8356251bc3d5 0 00:17:32.747 17:06:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 0bf14f3c7755a809a3ae8356251bc3d5 0 00:17:32.747 17:06:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:17:32.747 17:06:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:32.747 17:06:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=0bf14f3c7755a809a3ae8356251bc3d5 00:17:32.747 17:06:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:17:32.747 17:06:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:17:32.748 17:06:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Geg 00:17:32.748 17:06:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Geg 00:17:32.748 17:06:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.Geg 00:17:32.748 17:06:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:17:32.748 17:06:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:17:32.748 17:06:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:32.748 17:06:22 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@724 -- # local -A digests 00:17:32.748 17:06:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:17:32.748 17:06:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:17:32.748 17:06:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:32.748 17:06:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=90d798f20d49dd65297c693b08a667dd1296e0e9b972f86761d5797bba7742c9 00:17:32.748 17:06:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:17:32.748 17:06:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.u12 00:17:32.748 17:06:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 90d798f20d49dd65297c693b08a667dd1296e0e9b972f86761d5797bba7742c9 3 00:17:32.748 17:06:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 90d798f20d49dd65297c693b08a667dd1296e0e9b972f86761d5797bba7742c9 3 00:17:32.748 17:06:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:17:32.748 17:06:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:32.748 17:06:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=90d798f20d49dd65297c693b08a667dd1296e0e9b972f86761d5797bba7742c9 00:17:32.748 17:06:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:17:32.748 17:06:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:17:32.748 17:06:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.u12 00:17:32.748 17:06:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.u12 00:17:32.748 17:06:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.u12 00:17:32.748 17:06:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:17:32.748 17:06:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:17:32.748 17:06:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:32.748 17:06:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:17:32.748 17:06:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:17:32.748 17:06:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:17:32.748 17:06:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:32.748 17:06:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=144affc78d2a45eedd93fecaea8c7c625373703116885fc4 00:17:32.748 17:06:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:17:32.748 17:06:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Laf 00:17:32.748 17:06:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 144affc78d2a45eedd93fecaea8c7c625373703116885fc4 0 00:17:32.748 17:06:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 144affc78d2a45eedd93fecaea8c7c625373703116885fc4 0 00:17:32.748 17:06:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:17:32.748 17:06:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:32.748 17:06:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=144affc78d2a45eedd93fecaea8c7c625373703116885fc4 00:17:32.748 17:06:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:17:32.748 17:06:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # 
python - 00:17:32.748 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Laf 00:17:32.748 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Laf 00:17:32.748 17:06:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.Laf 00:17:32.748 17:06:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:17:32.748 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:17:32.748 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:32.748 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:17:32.748 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:17:32.748 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:17:32.748 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:32.748 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=2cf2b967fb1998feaa2124050cb761993d25ad2da6d1181a 00:17:32.748 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:17:32.748 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Po2 00:17:32.748 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 2cf2b967fb1998feaa2124050cb761993d25ad2da6d1181a 2 00:17:32.748 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 2cf2b967fb1998feaa2124050cb761993d25ad2da6d1181a 2 00:17:32.748 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:17:32.748 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:32.748 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=2cf2b967fb1998feaa2124050cb761993d25ad2da6d1181a 00:17:32.748 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:17:32.748 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:17:33.006 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Po2 00:17:33.006 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Po2 00:17:33.006 17:06:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.Po2 00:17:33.006 17:06:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:17:33.006 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:17:33.006 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:33.007 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:17:33.007 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:17:33.007 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:17:33.007 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:33.007 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=f502a1bf8d11393a602e6b62c787d6af 00:17:33.007 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:17:33.007 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.Rt9 00:17:33.007 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key f502a1bf8d11393a602e6b62c787d6af 
1 00:17:33.007 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 f502a1bf8d11393a602e6b62c787d6af 1 00:17:33.007 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:17:33.007 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:33.007 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=f502a1bf8d11393a602e6b62c787d6af 00:17:33.007 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:17:33.007 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:17:33.007 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.Rt9 00:17:33.007 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.Rt9 00:17:33.007 17:06:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.Rt9 00:17:33.007 17:06:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:17:33.007 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:17:33.007 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:33.007 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:17:33.007 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:17:33.007 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:17:33.007 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:33.007 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=d09c4c8930ba29466b50a81a561213d0 00:17:33.007 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:17:33.007 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.uqd 00:17:33.007 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key d09c4c8930ba29466b50a81a561213d0 1 00:17:33.007 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 d09c4c8930ba29466b50a81a561213d0 1 00:17:33.007 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:17:33.007 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:33.007 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=d09c4c8930ba29466b50a81a561213d0 00:17:33.007 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:17:33.007 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:17:33.007 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.uqd 00:17:33.007 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.uqd 00:17:33.007 17:06:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.uqd 00:17:33.007 17:06:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:17:33.007 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:17:33.007 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:33.007 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:17:33.007 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:17:33.007 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:17:33.007 17:06:23 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:33.007 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=8a8e75c72100945110ac77615f634e8e9460303b5e990de0 00:17:33.007 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:17:33.007 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.ySL 00:17:33.007 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 8a8e75c72100945110ac77615f634e8e9460303b5e990de0 2 00:17:33.007 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 8a8e75c72100945110ac77615f634e8e9460303b5e990de0 2 00:17:33.007 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:17:33.007 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:33.007 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=8a8e75c72100945110ac77615f634e8e9460303b5e990de0 00:17:33.007 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:17:33.007 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:17:33.007 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.ySL 00:17:33.007 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.ySL 00:17:33.007 17:06:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.ySL 00:17:33.007 17:06:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:17:33.007 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:17:33.007 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:33.007 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:17:33.007 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:17:33.007 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:17:33.007 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:33.007 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=ae7045e74576e6f394bf21e85fe07667 00:17:33.007 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:17:33.007 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.BH2 00:17:33.007 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key ae7045e74576e6f394bf21e85fe07667 0 00:17:33.007 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 ae7045e74576e6f394bf21e85fe07667 0 00:17:33.007 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:17:33.007 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:33.007 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=ae7045e74576e6f394bf21e85fe07667 00:17:33.007 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:17:33.007 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:17:33.265 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.BH2 00:17:33.265 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.BH2 00:17:33.265 17:06:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.BH2 00:17:33.265 17:06:23 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:17:33.265 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:17:33.265 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:33.265 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:17:33.265 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:17:33.265 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:17:33.265 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:33.265 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=11734fbd75539140c8b60a96b566a6d6dd9337cf5ec188b67db80ec6691889b5 00:17:33.265 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:17:33.265 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Mp8 00:17:33.265 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 11734fbd75539140c8b60a96b566a6d6dd9337cf5ec188b67db80ec6691889b5 3 00:17:33.265 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 11734fbd75539140c8b60a96b566a6d6dd9337cf5ec188b67db80ec6691889b5 3 00:17:33.265 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:17:33.265 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:33.265 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=11734fbd75539140c8b60a96b566a6d6dd9337cf5ec188b67db80ec6691889b5 00:17:33.265 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:17:33.265 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:17:33.265 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Mp8 00:17:33.265 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Mp8 00:17:33.265 17:06:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.Mp8 00:17:33.265 17:06:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:17:33.265 17:06:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 78499 00:17:33.265 17:06:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 78499 ']' 00:17:33.265 17:06:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:33.265 17:06:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:33.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:33.265 17:06:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
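The five gen_dhchap_key calls above produce the DH-HMAC-CHAP secrets used for the rest of the test: random bytes are read from /dev/urandom through xxd, and a short python helper (nvmf/common.sh@705, whose body xtrace does not expand) wraps them into the DHHC-1:<digest-id>:<base64 payload>: form that appears later in the log, before the file is made owner-only. A hedged sketch of the observable steps for the first key (gen_dhchap_key null 32):

  len=32
  key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # 32 hex chars, e.g. 0bf14f3c7755a809a3ae8356251bc3d5
  file=$(mktemp -t spdk.key-null.XXX)              # e.g. /tmp/spdk.key-null.Geg
  # format_dhchap_key then writes DHHC-1:00:<encoded key>: into $file (the encoding is done by the python helper, not shown here)
  chmod 0600 "$file"

keys[] collects the host-side secrets and ckeys[] the matching controller secrets; note that ckeys[4] is left empty in the trace above, so keyid 4 has no controller secret.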
00:17:33.265 17:06:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:33.265 17:06:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.523 17:06:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:33.523 17:06:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:17:33.523 17:06:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:33.523 17:06:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Geg 00:17:33.523 17:06:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.523 17:06:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.523 17:06:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.523 17:06:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.u12 ]] 00:17:33.523 17:06:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.u12 00:17:33.523 17:06:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.523 17:06:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.523 17:06:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.523 17:06:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:33.523 17:06:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.Laf 00:17:33.523 17:06:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.523 17:06:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.523 17:06:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.523 17:06:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.Po2 ]] 00:17:33.523 17:06:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Po2 00:17:33.523 17:06:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.523 17:06:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.523 17:06:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.523 17:06:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:33.523 17:06:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.Rt9 00:17:33.523 17:06:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.523 17:06:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.523 17:06:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.523 17:06:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.uqd ]] 00:17:33.523 17:06:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.uqd 00:17:33.523 17:06:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.523 17:06:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.523 17:06:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.523 17:06:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
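Each secret is then handed to the SPDK application started by nvmfappstart through the keyring_file_add_key RPC, registered in pairs so that the later bdev_nvme_attach_controller calls can refer to them by name (key0/ckey0, key1/ckey1, and so on). Condensed from the trace above; rpc_cmd is the autotest wrapper around SPDK's JSON-RPC client:

  rpc_cmd keyring_file_add_key key0  /tmp/spdk.key-null.Geg     # host secret for keyid 0
  rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.u12   # matching controller secret
  rpc_cmd keyring_file_add_key key1  /tmp/spdk.key-null.Laf
  rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Po2
  rpc_cmd keyring_file_add_key key2  /tmp/spdk.key-sha256.Rt9
  rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.uqd

key3/ckey3 and key4 follow the same pattern in the next few entries.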
00:17:33.523 17:06:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.ySL 00:17:33.523 17:06:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.523 17:06:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.523 17:06:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.523 17:06:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.BH2 ]] 00:17:33.523 17:06:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.BH2 00:17:33.523 17:06:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.523 17:06:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.523 17:06:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.523 17:06:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:33.523 17:06:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.Mp8 00:17:33.523 17:06:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.523 17:06:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.523 17:06:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.523 17:06:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:17:33.523 17:06:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:17:33.523 17:06:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:17:33.523 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:33.523 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:33.523 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:33.523 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:33.523 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:33.523 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:33.523 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:33.523 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:33.523 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:33.523 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:33.523 17:06:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:17:33.523 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:17:33.523 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:17:33.523 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:33.523 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:17:33.523 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:17:33.523 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
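With the kernel_subsystem, kernel_namespace and kernel_port paths defined above, configure_kernel_target (traced over the next several entries) builds the Linux-kernel NVMe-oF target that the SPDK host will authenticate against: a configfs subsystem backed by the first unused, non-zoned local block device, exposed over TCP on 10.0.0.1:4420. A condensed sketch; xtrace hides redirection targets, so the attribute file names below are the standard nvmet configfs names and are assumptions rather than values taken from the trace:

  sub=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0   # $kernel_subsystem above
  port=/sys/kernel/config/nvmet/ports/1                                # $kernel_port above
  modprobe nvmet
  mkdir "$sub" "$sub/namespaces/1" "$port"
  echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$sub/attr_model"      # assumed target of the 'echo SPDK-...' entry
  echo 1 > "$sub/attr_allow_any_host"                           # assumed target of the first bare 'echo 1'
  echo /dev/nvme1n1 > "$sub/namespaces/1/device_path"           # device picked by the GPT scan below
  echo 1 > "$sub/namespaces/1/enable"
  echo 10.0.0.1 > "$port/addr_traddr"
  echo tcp > "$port/addr_trtype"
  echo 4420 > "$port/addr_trsvcid"
  echo ipv4 > "$port/addr_adrfam"
  ln -s "$sub" "$port/subsystems/"

The nvme discover output a little further down (two discovery log records: the discovery subsystem and nqn.2024-02.io.spdk:cnode0 on 10.0.0.1:4420) confirms the port is live before any authenticated connect is attempted.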
00:17:33.523 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:17:33.523 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:17:33.523 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:17:33.523 17:06:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:17:33.781 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:34.039 Waiting for block devices as requested 00:17:34.039 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:17:34.039 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:17:34.604 17:06:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:17:34.604 17:06:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:17:34.604 17:06:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:17:34.604 17:06:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:17:34.604 17:06:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:17:34.604 17:06:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:17:34.604 17:06:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:17:34.604 17:06:24 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:17:34.604 17:06:24 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:17:34.604 No valid GPT data, bailing 00:17:34.604 17:06:24 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:17:34.604 17:06:24 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:17:34.604 17:06:24 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:17:34.604 17:06:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:17:34.604 17:06:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:17:34.604 17:06:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:17:34.604 17:06:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:17:34.604 17:06:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:17:34.604 17:06:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:17:34.604 17:06:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:17:34.604 17:06:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:17:34.604 17:06:24 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:17:34.604 17:06:24 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:17:34.863 No valid GPT data, bailing 00:17:34.863 17:06:24 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:17:34.863 17:06:24 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:17:34.863 17:06:24 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:17:34.863 17:06:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:17:34.863 17:06:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in 
/sys/block/nvme* 00:17:34.863 17:06:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:17:34.863 17:06:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:17:34.863 17:06:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:17:34.863 17:06:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:17:34.863 17:06:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:17:34.863 17:06:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:17:34.863 17:06:24 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:17:34.863 17:06:24 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:17:34.863 No valid GPT data, bailing 00:17:34.863 17:06:24 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:17:34.863 17:06:25 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:17:34.863 17:06:25 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:17:34.863 17:06:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:17:34.863 17:06:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:17:34.863 17:06:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:17:34.863 17:06:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:17:34.863 17:06:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:17:34.863 17:06:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:17:34.863 17:06:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:17:34.863 17:06:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:17:34.863 17:06:25 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:17:34.863 17:06:25 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:17:34.863 No valid GPT data, bailing 00:17:34.863 17:06:25 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:17:34.863 17:06:25 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:17:34.863 17:06:25 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:17:34.863 17:06:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:17:34.863 17:06:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:17:34.863 17:06:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:34.863 17:06:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:17:34.863 17:06:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:17:34.863 17:06:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:17:34.863 17:06:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:17:34.863 17:06:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:17:34.863 17:06:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:17:34.863 17:06:25 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:17:34.863 17:06:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:17:34.863 17:06:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:17:34.863 17:06:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:17:34.863 17:06:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:17:34.863 17:06:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --hostid=0b4e8503-7bac-4879-926a-209303c4b3da -a 10.0.0.1 -t tcp -s 4420 00:17:34.863 00:17:34.863 Discovery Log Number of Records 2, Generation counter 2 00:17:34.863 =====Discovery Log Entry 0====== 00:17:34.863 trtype: tcp 00:17:34.863 adrfam: ipv4 00:17:34.863 subtype: current discovery subsystem 00:17:34.863 treq: not specified, sq flow control disable supported 00:17:34.863 portid: 1 00:17:34.863 trsvcid: 4420 00:17:34.863 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:17:34.863 traddr: 10.0.0.1 00:17:34.863 eflags: none 00:17:34.863 sectype: none 00:17:34.863 =====Discovery Log Entry 1====== 00:17:34.863 trtype: tcp 00:17:34.863 adrfam: ipv4 00:17:34.863 subtype: nvme subsystem 00:17:34.863 treq: not specified, sq flow control disable supported 00:17:34.863 portid: 1 00:17:34.863 trsvcid: 4420 00:17:34.863 subnqn: nqn.2024-02.io.spdk:cnode0 00:17:34.863 traddr: 10.0.0.1 00:17:34.863 eflags: none 00:17:34.863 sectype: none 00:17:34.863 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:17:34.863 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:17:34.863 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:17:34.863 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:17:34.863 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:34.863 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:34.863 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:34.863 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:34.863 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTQ0YWZmYzc4ZDJhNDVlZWRkOTNmZWNhZWE4YzdjNjI1MzczNzAzMTE2ODg1ZmM0fPXbvA==: 00:17:34.863 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmNmMmI5NjdmYjE5OThmZWFhMjEyNDA1MGNiNzYxOTkzZDI1YWQyZGE2ZDExODFhMmfSfg==: 00:17:34.863 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:34.863 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:35.122 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTQ0YWZmYzc4ZDJhNDVlZWRkOTNmZWNhZWE4YzdjNjI1MzczNzAzMTE2ODg1ZmM0fPXbvA==: 00:17:35.122 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmNmMmI5NjdmYjE5OThmZWFhMjEyNDA1MGNiNzYxOTkzZDI1YWQyZGE2ZDExODFhMmfSfg==: ]] 00:17:35.122 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmNmMmI5NjdmYjE5OThmZWFhMjEyNDA1MGNiNzYxOTkzZDI1YWQyZGE2ZDExODFhMmfSfg==: 00:17:35.122 17:06:25 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@93 -- # IFS=, 00:17:35.122 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:17:35.122 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:17:35.122 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:35.122 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:17:35.122 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:35.122 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:17:35.122 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:35.122 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:35.122 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:35.122 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:35.122 17:06:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.122 17:06:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.122 17:06:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.122 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:35.122 17:06:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:35.122 17:06:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:35.122 17:06:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:35.122 17:06:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:35.122 17:06:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:35.122 17:06:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:35.122 17:06:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:35.122 17:06:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:35.122 17:06:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:35.122 17:06:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:35.122 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:35.122 17:06:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.122 17:06:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.122 nvme0n1 00:17:35.122 17:06:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.122 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:35.122 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:35.122 17:06:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.122 17:06:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.122 17:06:25 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.382 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.382 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:35.382 17:06:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.382 17:06:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.382 17:06:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.382 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:17:35.383 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:35.383 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:35.383 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:17:35.383 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:35.383 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:35.383 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:35.383 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:35.383 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGJmMTRmM2M3NzU1YTgwOWEzYWU4MzU2MjUxYmMzZDULvU7a: 00:17:35.383 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTBkNzk4ZjIwZDQ5ZGQ2NTI5N2M2OTNiMDhhNjY3ZGQxMjk2ZTBlOWI5NzJmODY3NjFkNTc5N2JiYTc3NDJjOZIIbO0=: 00:17:35.383 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:35.383 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:35.383 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGJmMTRmM2M3NzU1YTgwOWEzYWU4MzU2MjUxYmMzZDULvU7a: 00:17:35.383 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTBkNzk4ZjIwZDQ5ZGQ2NTI5N2M2OTNiMDhhNjY3ZGQxMjk2ZTBlOWI5NzJmODY3NjFkNTc5N2JiYTc3NDJjOZIIbO0=: ]] 00:17:35.383 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTBkNzk4ZjIwZDQ5ZGQ2NTI5N2M2OTNiMDhhNjY3ZGQxMjk2ZTBlOWI5NzJmODY3NjFkNTc5N2JiYTc3NDJjOZIIbO0=: 00:17:35.383 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:17:35.383 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:35.383 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:35.383 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:35.383 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:35.383 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:35.383 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:35.383 17:06:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.383 17:06:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.383 17:06:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.383 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:35.383 17:06:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:35.383 17:06:25 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:17:35.383 17:06:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:35.383 17:06:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:35.383 17:06:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:35.383 17:06:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:35.383 17:06:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:35.383 17:06:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:35.383 17:06:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:35.383 17:06:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:35.383 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:35.383 17:06:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.383 17:06:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.383 nvme0n1 00:17:35.383 17:06:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.383 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:35.383 17:06:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.383 17:06:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.383 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:35.383 17:06:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.383 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.383 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:35.383 17:06:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.383 17:06:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.383 17:06:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.383 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:35.383 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:17:35.383 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:35.383 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:35.383 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:35.383 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:35.383 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTQ0YWZmYzc4ZDJhNDVlZWRkOTNmZWNhZWE4YzdjNjI1MzczNzAzMTE2ODg1ZmM0fPXbvA==: 00:17:35.383 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmNmMmI5NjdmYjE5OThmZWFhMjEyNDA1MGNiNzYxOTkzZDI1YWQyZGE2ZDExODFhMmfSfg==: 00:17:35.383 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:35.383 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:35.383 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MTQ0YWZmYzc4ZDJhNDVlZWRkOTNmZWNhZWE4YzdjNjI1MzczNzAzMTE2ODg1ZmM0fPXbvA==: 00:17:35.383 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmNmMmI5NjdmYjE5OThmZWFhMjEyNDA1MGNiNzYxOTkzZDI1YWQyZGE2ZDExODFhMmfSfg==: ]] 00:17:35.383 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmNmMmI5NjdmYjE5OThmZWFhMjEyNDA1MGNiNzYxOTkzZDI1YWQyZGE2ZDExODFhMmfSfg==: 00:17:35.383 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:17:35.383 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:35.383 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:35.383 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:35.383 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:35.383 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:35.383 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:35.383 17:06:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.383 17:06:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.383 17:06:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.383 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:35.383 17:06:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:35.383 17:06:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:35.383 17:06:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:35.383 17:06:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:35.383 17:06:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:35.383 17:06:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:35.383 17:06:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:35.383 17:06:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:35.383 17:06:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:35.383 17:06:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:35.383 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:35.383 17:06:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.383 17:06:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.644 nvme0n1 00:17:35.644 17:06:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.644 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:35.644 17:06:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.644 17:06:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.644 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:35.644 17:06:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.644 17:06:25 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.644 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:35.644 17:06:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.644 17:06:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.644 17:06:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.644 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:35.644 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:17:35.644 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:35.644 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:35.644 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:35.644 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:35.644 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjUwMmExYmY4ZDExMzkzYTYwMmU2YjYyYzc4N2Q2YWas+WD7: 00:17:35.644 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDA5YzRjODkzMGJhMjk0NjZiNTBhODFhNTYxMjEzZDDwsH1o: 00:17:35.644 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:35.644 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:35.644 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjUwMmExYmY4ZDExMzkzYTYwMmU2YjYyYzc4N2Q2YWas+WD7: 00:17:35.644 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDA5YzRjODkzMGJhMjk0NjZiNTBhODFhNTYxMjEzZDDwsH1o: ]] 00:17:35.644 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDA5YzRjODkzMGJhMjk0NjZiNTBhODFhNTYxMjEzZDDwsH1o: 00:17:35.644 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:17:35.644 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:35.644 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:35.644 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:35.644 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:35.644 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:35.644 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:35.644 17:06:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.644 17:06:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.644 17:06:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.644 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:35.644 17:06:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:35.644 17:06:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:35.644 17:06:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:35.644 17:06:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:35.644 17:06:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:35.644 17:06:25 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:35.644 17:06:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:35.644 17:06:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:35.644 17:06:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:35.644 17:06:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:35.644 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:35.644 17:06:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.644 17:06:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.644 nvme0n1 00:17:35.644 17:06:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.644 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:35.644 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:35.644 17:06:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.644 17:06:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.644 17:06:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.903 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.903 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:35.903 17:06:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.903 17:06:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.903 17:06:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.903 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:35.903 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:17:35.903 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:35.903 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:35.903 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:35.903 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:35.903 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGE4ZTc1YzcyMTAwOTQ1MTEwYWM3NzYxNWY2MzRlOGU5NDYwMzAzYjVlOTkwZGUwZOK5Ig==: 00:17:35.903 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWU3MDQ1ZTc0NTc2ZTZmMzk0YmYyMWU4NWZlMDc2Njezi5ED: 00:17:35.903 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:35.903 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:35.903 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGE4ZTc1YzcyMTAwOTQ1MTEwYWM3NzYxNWY2MzRlOGU5NDYwMzAzYjVlOTkwZGUwZOK5Ig==: 00:17:35.903 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWU3MDQ1ZTc0NTc2ZTZmMzk0YmYyMWU4NWZlMDc2Njezi5ED: ]] 00:17:35.903 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWU3MDQ1ZTc0NTc2ZTZmMzk0YmYyMWU4NWZlMDc2Njezi5ED: 00:17:35.903 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:17:35.903 17:06:25 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:35.903 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:35.903 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:35.903 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:35.903 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:35.904 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:35.904 17:06:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.904 17:06:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.904 17:06:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.904 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:35.904 17:06:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:35.904 17:06:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:35.904 17:06:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:35.904 17:06:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:35.904 17:06:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:35.904 17:06:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:35.904 17:06:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:35.904 17:06:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:35.904 17:06:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:35.904 17:06:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:35.904 17:06:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:35.904 17:06:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.904 17:06:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.904 nvme0n1 00:17:35.904 17:06:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.904 17:06:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:35.904 17:06:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:35.904 17:06:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.904 17:06:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.904 17:06:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.904 17:06:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.904 17:06:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:35.904 17:06:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.904 17:06:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.904 17:06:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.904 17:06:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:17:35.904 17:06:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:17:35.904 17:06:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:35.904 17:06:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:35.904 17:06:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:35.904 17:06:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:35.904 17:06:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTE3MzRmYmQ3NTUzOTE0MGM4YjYwYTk2YjU2NmE2ZDZkZDkzMzdjZjVlYzE4OGI2N2RiODBlYzY2OTE4ODliNegR4ac=: 00:17:35.904 17:06:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:35.904 17:06:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:35.904 17:06:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:35.904 17:06:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTE3MzRmYmQ3NTUzOTE0MGM4YjYwYTk2YjU2NmE2ZDZkZDkzMzdjZjVlYzE4OGI2N2RiODBlYzY2OTE4ODliNegR4ac=: 00:17:35.904 17:06:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:35.904 17:06:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:17:35.904 17:06:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:35.904 17:06:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:35.904 17:06:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:35.904 17:06:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:35.904 17:06:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:35.904 17:06:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:35.904 17:06:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.904 17:06:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.904 17:06:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.904 17:06:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:35.904 17:06:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:35.904 17:06:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:35.904 17:06:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:35.904 17:06:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:35.904 17:06:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:35.904 17:06:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:35.904 17:06:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:35.904 17:06:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:35.904 17:06:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:35.904 17:06:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:35.904 17:06:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:35.904 17:06:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:17:35.904 17:06:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.163 nvme0n1 00:17:36.163 17:06:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.163 17:06:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:36.163 17:06:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.163 17:06:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:36.163 17:06:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.163 17:06:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.163 17:06:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.163 17:06:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:36.163 17:06:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.163 17:06:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.163 17:06:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.163 17:06:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:36.163 17:06:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:36.163 17:06:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:17:36.163 17:06:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:36.163 17:06:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:36.163 17:06:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:36.163 17:06:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:36.163 17:06:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGJmMTRmM2M3NzU1YTgwOWEzYWU4MzU2MjUxYmMzZDULvU7a: 00:17:36.163 17:06:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTBkNzk4ZjIwZDQ5ZGQ2NTI5N2M2OTNiMDhhNjY3ZGQxMjk2ZTBlOWI5NzJmODY3NjFkNTc5N2JiYTc3NDJjOZIIbO0=: 00:17:36.163 17:06:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:36.163 17:06:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:36.421 17:06:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGJmMTRmM2M3NzU1YTgwOWEzYWU4MzU2MjUxYmMzZDULvU7a: 00:17:36.421 17:06:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTBkNzk4ZjIwZDQ5ZGQ2NTI5N2M2OTNiMDhhNjY3ZGQxMjk2ZTBlOWI5NzJmODY3NjFkNTc5N2JiYTc3NDJjOZIIbO0=: ]] 00:17:36.421 17:06:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTBkNzk4ZjIwZDQ5ZGQ2NTI5N2M2OTNiMDhhNjY3ZGQxMjk2ZTBlOWI5NzJmODY3NjFkNTc5N2JiYTc3NDJjOZIIbO0=: 00:17:36.421 17:06:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:17:36.421 17:06:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:36.421 17:06:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:36.421 17:06:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:36.421 17:06:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:36.421 17:06:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:36.421 17:06:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups ffdhe3072 00:17:36.421 17:06:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.421 17:06:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.421 17:06:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.421 17:06:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:36.421 17:06:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:36.421 17:06:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:36.421 17:06:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:36.422 17:06:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:36.422 17:06:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:36.422 17:06:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:36.422 17:06:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:36.422 17:06:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:36.422 17:06:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:36.422 17:06:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:36.422 17:06:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:36.422 17:06:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.422 17:06:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.680 nvme0n1 00:17:36.680 17:06:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.680 17:06:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:36.680 17:06:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.680 17:06:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.680 17:06:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:36.680 17:06:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.680 17:06:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.680 17:06:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:36.680 17:06:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.680 17:06:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.680 17:06:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.680 17:06:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:36.680 17:06:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:17:36.680 17:06:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:36.680 17:06:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:36.680 17:06:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:36.680 17:06:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:36.680 17:06:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MTQ0YWZmYzc4ZDJhNDVlZWRkOTNmZWNhZWE4YzdjNjI1MzczNzAzMTE2ODg1ZmM0fPXbvA==: 00:17:36.680 17:06:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmNmMmI5NjdmYjE5OThmZWFhMjEyNDA1MGNiNzYxOTkzZDI1YWQyZGE2ZDExODFhMmfSfg==: 00:17:36.680 17:06:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:36.680 17:06:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:36.680 17:06:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTQ0YWZmYzc4ZDJhNDVlZWRkOTNmZWNhZWE4YzdjNjI1MzczNzAzMTE2ODg1ZmM0fPXbvA==: 00:17:36.680 17:06:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmNmMmI5NjdmYjE5OThmZWFhMjEyNDA1MGNiNzYxOTkzZDI1YWQyZGE2ZDExODFhMmfSfg==: ]] 00:17:36.680 17:06:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmNmMmI5NjdmYjE5OThmZWFhMjEyNDA1MGNiNzYxOTkzZDI1YWQyZGE2ZDExODFhMmfSfg==: 00:17:36.680 17:06:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:17:36.680 17:06:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:36.680 17:06:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:36.680 17:06:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:36.680 17:06:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:36.680 17:06:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:36.680 17:06:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:36.680 17:06:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.680 17:06:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.680 17:06:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.680 17:06:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:36.680 17:06:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:36.680 17:06:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:36.680 17:06:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:36.680 17:06:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:36.680 17:06:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:36.680 17:06:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:36.680 17:06:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:36.680 17:06:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:36.680 17:06:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:36.680 17:06:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:36.680 17:06:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:36.681 17:06:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.681 17:06:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.681 nvme0n1 00:17:36.681 17:06:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.681 17:06:26 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:36.681 17:06:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:36.681 17:06:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.681 17:06:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.681 17:06:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.939 17:06:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.939 17:06:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:36.939 17:06:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.939 17:06:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.939 17:06:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.939 17:06:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:36.939 17:06:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:17:36.939 17:06:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:36.939 17:06:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:36.939 17:06:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:36.939 17:06:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:36.939 17:06:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjUwMmExYmY4ZDExMzkzYTYwMmU2YjYyYzc4N2Q2YWas+WD7: 00:17:36.939 17:06:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDA5YzRjODkzMGJhMjk0NjZiNTBhODFhNTYxMjEzZDDwsH1o: 00:17:36.939 17:06:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:36.939 17:06:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:36.939 17:06:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjUwMmExYmY4ZDExMzkzYTYwMmU2YjYyYzc4N2Q2YWas+WD7: 00:17:36.939 17:06:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDA5YzRjODkzMGJhMjk0NjZiNTBhODFhNTYxMjEzZDDwsH1o: ]] 00:17:36.939 17:06:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDA5YzRjODkzMGJhMjk0NjZiNTBhODFhNTYxMjEzZDDwsH1o: 00:17:36.939 17:06:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:17:36.939 17:06:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:36.939 17:06:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:36.939 17:06:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:36.939 17:06:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:36.939 17:06:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:36.939 17:06:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:36.939 17:06:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.939 17:06:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.939 17:06:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.939 17:06:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:36.939 17:06:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:17:36.939 17:06:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:36.939 17:06:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:36.940 17:06:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:36.940 17:06:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:36.940 17:06:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:36.940 17:06:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:36.940 17:06:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:36.940 17:06:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:36.940 17:06:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:36.940 17:06:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:36.940 17:06:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.940 17:06:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.940 nvme0n1 00:17:36.940 17:06:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.940 17:06:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:36.940 17:06:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:36.940 17:06:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.940 17:06:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.940 17:06:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.940 17:06:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.940 17:06:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:36.940 17:06:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.940 17:06:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.940 17:06:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.940 17:06:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:36.940 17:06:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:17:36.940 17:06:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:36.940 17:06:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:36.940 17:06:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:36.940 17:06:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:36.940 17:06:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGE4ZTc1YzcyMTAwOTQ1MTEwYWM3NzYxNWY2MzRlOGU5NDYwMzAzYjVlOTkwZGUwZOK5Ig==: 00:17:36.940 17:06:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWU3MDQ1ZTc0NTc2ZTZmMzk0YmYyMWU4NWZlMDc2Njezi5ED: 00:17:36.940 17:06:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:36.940 17:06:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:36.940 17:06:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:OGE4ZTc1YzcyMTAwOTQ1MTEwYWM3NzYxNWY2MzRlOGU5NDYwMzAzYjVlOTkwZGUwZOK5Ig==: 00:17:36.940 17:06:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWU3MDQ1ZTc0NTc2ZTZmMzk0YmYyMWU4NWZlMDc2Njezi5ED: ]] 00:17:36.940 17:06:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWU3MDQ1ZTc0NTc2ZTZmMzk0YmYyMWU4NWZlMDc2Njezi5ED: 00:17:36.940 17:06:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:17:36.940 17:06:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:36.940 17:06:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:36.940 17:06:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:36.940 17:06:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:36.940 17:06:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:36.940 17:06:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:36.940 17:06:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.940 17:06:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.198 17:06:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.198 17:06:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:37.198 17:06:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:37.198 17:06:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:37.198 17:06:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:37.198 17:06:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:37.198 17:06:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:37.198 17:06:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:37.198 17:06:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:37.198 17:06:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:37.198 17:06:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:37.198 17:06:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:37.198 17:06:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:37.198 17:06:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.198 17:06:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.198 nvme0n1 00:17:37.198 17:06:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.198 17:06:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:37.198 17:06:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:37.198 17:06:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.198 17:06:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.198 17:06:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.198 17:06:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:17:37.198 17:06:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:37.198 17:06:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.199 17:06:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.199 17:06:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.199 17:06:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:37.199 17:06:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:17:37.199 17:06:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:37.199 17:06:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:37.199 17:06:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:37.199 17:06:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:37.199 17:06:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTE3MzRmYmQ3NTUzOTE0MGM4YjYwYTk2YjU2NmE2ZDZkZDkzMzdjZjVlYzE4OGI2N2RiODBlYzY2OTE4ODliNegR4ac=: 00:17:37.199 17:06:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:37.199 17:06:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:37.199 17:06:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:37.199 17:06:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTE3MzRmYmQ3NTUzOTE0MGM4YjYwYTk2YjU2NmE2ZDZkZDkzMzdjZjVlYzE4OGI2N2RiODBlYzY2OTE4ODliNegR4ac=: 00:17:37.199 17:06:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:37.199 17:06:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:17:37.199 17:06:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:37.199 17:06:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:37.199 17:06:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:37.199 17:06:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:37.199 17:06:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:37.199 17:06:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:37.199 17:06:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.199 17:06:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.199 17:06:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.199 17:06:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:37.199 17:06:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:37.199 17:06:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:37.199 17:06:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:37.199 17:06:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:37.199 17:06:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:37.199 17:06:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:37.199 17:06:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:37.199 17:06:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
00:17:37.199 17:06:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:37.199 17:06:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:37.199 17:06:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:37.199 17:06:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.199 17:06:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.457 nvme0n1 00:17:37.457 17:06:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.457 17:06:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:37.457 17:06:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.457 17:06:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:37.457 17:06:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.457 17:06:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.457 17:06:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:37.458 17:06:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:37.458 17:06:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.458 17:06:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.458 17:06:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.458 17:06:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:37.458 17:06:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:37.458 17:06:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:17:37.458 17:06:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:37.458 17:06:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:37.458 17:06:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:37.458 17:06:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:37.458 17:06:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGJmMTRmM2M3NzU1YTgwOWEzYWU4MzU2MjUxYmMzZDULvU7a: 00:17:37.458 17:06:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTBkNzk4ZjIwZDQ5ZGQ2NTI5N2M2OTNiMDhhNjY3ZGQxMjk2ZTBlOWI5NzJmODY3NjFkNTc5N2JiYTc3NDJjOZIIbO0=: 00:17:37.458 17:06:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:37.458 17:06:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:38.024 17:06:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGJmMTRmM2M3NzU1YTgwOWEzYWU4MzU2MjUxYmMzZDULvU7a: 00:17:38.024 17:06:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTBkNzk4ZjIwZDQ5ZGQ2NTI5N2M2OTNiMDhhNjY3ZGQxMjk2ZTBlOWI5NzJmODY3NjFkNTc5N2JiYTc3NDJjOZIIbO0=: ]] 00:17:38.024 17:06:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTBkNzk4ZjIwZDQ5ZGQ2NTI5N2M2OTNiMDhhNjY3ZGQxMjk2ZTBlOWI5NzJmODY3NjFkNTc5N2JiYTc3NDJjOZIIbO0=: 00:17:38.024 17:06:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:17:38.024 17:06:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
00:17:38.024 17:06:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:38.024 17:06:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:38.024 17:06:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:38.024 17:06:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:38.024 17:06:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:38.024 17:06:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.024 17:06:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.024 17:06:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.024 17:06:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:38.024 17:06:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:38.024 17:06:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:38.024 17:06:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:38.024 17:06:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:38.024 17:06:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:38.024 17:06:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:38.024 17:06:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:38.024 17:06:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:38.024 17:06:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:38.024 17:06:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:38.024 17:06:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:38.024 17:06:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.024 17:06:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.282 nvme0n1 00:17:38.282 17:06:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.282 17:06:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:38.282 17:06:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:38.282 17:06:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.282 17:06:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.282 17:06:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.282 17:06:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.282 17:06:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:38.282 17:06:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.282 17:06:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.282 17:06:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.282 17:06:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:38.282 17:06:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 1 00:17:38.282 17:06:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:38.282 17:06:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:38.282 17:06:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:38.282 17:06:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:38.282 17:06:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTQ0YWZmYzc4ZDJhNDVlZWRkOTNmZWNhZWE4YzdjNjI1MzczNzAzMTE2ODg1ZmM0fPXbvA==: 00:17:38.282 17:06:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmNmMmI5NjdmYjE5OThmZWFhMjEyNDA1MGNiNzYxOTkzZDI1YWQyZGE2ZDExODFhMmfSfg==: 00:17:38.282 17:06:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:38.282 17:06:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:38.282 17:06:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTQ0YWZmYzc4ZDJhNDVlZWRkOTNmZWNhZWE4YzdjNjI1MzczNzAzMTE2ODg1ZmM0fPXbvA==: 00:17:38.282 17:06:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmNmMmI5NjdmYjE5OThmZWFhMjEyNDA1MGNiNzYxOTkzZDI1YWQyZGE2ZDExODFhMmfSfg==: ]] 00:17:38.282 17:06:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmNmMmI5NjdmYjE5OThmZWFhMjEyNDA1MGNiNzYxOTkzZDI1YWQyZGE2ZDExODFhMmfSfg==: 00:17:38.282 17:06:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:17:38.282 17:06:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:38.282 17:06:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:38.282 17:06:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:38.282 17:06:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:38.282 17:06:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:38.282 17:06:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:38.282 17:06:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.282 17:06:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.282 17:06:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.282 17:06:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:38.282 17:06:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:38.282 17:06:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:38.282 17:06:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:38.282 17:06:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:38.282 17:06:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:38.282 17:06:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:38.282 17:06:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:38.282 17:06:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:38.282 17:06:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:38.282 17:06:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:38.282 17:06:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:38.282 17:06:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.282 17:06:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.539 nvme0n1 00:17:38.539 17:06:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.539 17:06:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:38.539 17:06:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:38.539 17:06:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.539 17:06:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.539 17:06:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.539 17:06:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.539 17:06:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:38.539 17:06:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.539 17:06:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.539 17:06:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.539 17:06:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:38.539 17:06:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:17:38.539 17:06:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:38.539 17:06:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:38.539 17:06:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:38.539 17:06:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:38.539 17:06:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjUwMmExYmY4ZDExMzkzYTYwMmU2YjYyYzc4N2Q2YWas+WD7: 00:17:38.539 17:06:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDA5YzRjODkzMGJhMjk0NjZiNTBhODFhNTYxMjEzZDDwsH1o: 00:17:38.539 17:06:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:38.539 17:06:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:38.539 17:06:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjUwMmExYmY4ZDExMzkzYTYwMmU2YjYyYzc4N2Q2YWas+WD7: 00:17:38.539 17:06:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDA5YzRjODkzMGJhMjk0NjZiNTBhODFhNTYxMjEzZDDwsH1o: ]] 00:17:38.539 17:06:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDA5YzRjODkzMGJhMjk0NjZiNTBhODFhNTYxMjEzZDDwsH1o: 00:17:38.539 17:06:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:17:38.539 17:06:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:38.539 17:06:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:38.539 17:06:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:38.539 17:06:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:38.539 17:06:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:38.539 17:06:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 00:17:38.539 17:06:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.539 17:06:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.539 17:06:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.539 17:06:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:38.539 17:06:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:38.539 17:06:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:38.539 17:06:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:38.539 17:06:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:38.539 17:06:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:38.539 17:06:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:38.539 17:06:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:38.539 17:06:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:38.539 17:06:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:38.539 17:06:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:38.539 17:06:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:38.539 17:06:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.539 17:06:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.797 nvme0n1 00:17:38.797 17:06:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.797 17:06:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:38.797 17:06:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.797 17:06:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.797 17:06:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:38.797 17:06:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.797 17:06:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.797 17:06:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:38.797 17:06:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.797 17:06:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.797 17:06:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.797 17:06:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:38.797 17:06:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:17:38.797 17:06:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:38.797 17:06:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:38.797 17:06:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:38.797 17:06:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:38.797 17:06:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:OGE4ZTc1YzcyMTAwOTQ1MTEwYWM3NzYxNWY2MzRlOGU5NDYwMzAzYjVlOTkwZGUwZOK5Ig==: 00:17:38.797 17:06:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWU3MDQ1ZTc0NTc2ZTZmMzk0YmYyMWU4NWZlMDc2Njezi5ED: 00:17:38.797 17:06:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:38.797 17:06:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:38.798 17:06:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGE4ZTc1YzcyMTAwOTQ1MTEwYWM3NzYxNWY2MzRlOGU5NDYwMzAzYjVlOTkwZGUwZOK5Ig==: 00:17:38.798 17:06:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWU3MDQ1ZTc0NTc2ZTZmMzk0YmYyMWU4NWZlMDc2Njezi5ED: ]] 00:17:38.798 17:06:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWU3MDQ1ZTc0NTc2ZTZmMzk0YmYyMWU4NWZlMDc2Njezi5ED: 00:17:38.798 17:06:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:17:38.798 17:06:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:38.798 17:06:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:38.798 17:06:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:38.798 17:06:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:38.798 17:06:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:38.798 17:06:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:38.798 17:06:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.798 17:06:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.798 17:06:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.798 17:06:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:38.798 17:06:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:38.798 17:06:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:38.798 17:06:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:38.798 17:06:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:38.798 17:06:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:38.798 17:06:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:38.798 17:06:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:38.798 17:06:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:38.798 17:06:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:38.798 17:06:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:38.798 17:06:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:38.798 17:06:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.798 17:06:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.056 nvme0n1 00:17:39.056 17:06:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.056 17:06:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 
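[editor's sketch] The round traced above is one iteration of connect_authenticate: restrict the initiator to a single digest/DH group, attach with a DH-HMAC-CHAP key pair, confirm the controller shows up, then detach. Below is a hypothetical standalone replay of that cycle using SPDK's scripts/rpc.py in place of the test suite's rpc_cmd wrapper; the rpc.py path and the key names key1/ckey1 (assumed to have been registered with the keyring earlier in the test, outside this excerpt) are assumptions, while the RPC names and flags are taken verbatim from the trace.
#!/usr/bin/env bash
# One connect/verify/detach round, mirroring the trace (sha256 + ffdhe4096, keyid 1).
RPC=./scripts/rpc.py                       # assumed location inside an SPDK checkout
TRADDR=10.0.0.1
SUBNQN=nqn.2024-02.io.spdk:cnode0
HOSTNQN=nqn.2024-02.io.spdk:host0
# Limit the host to the digest/DH group pair under test.
$RPC bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
# Connect with DH-HMAC-CHAP: key1 authenticates the host, ckey1 the controller.
$RPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a "$TRADDR" -s 4420 \
    -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key1 --dhchap-ctrlr-key ckey1
# Same check the test performs: the controller list should contain nvme0.
name=$($RPC bdev_nvme_get_controllers | jq -r '.[].name')
[[ $name == nvme0 ]] || { echo "authenticated connect failed" >&2; exit 1; }
# Tear down before the next key/dhgroup combination.
$RPC bdev_nvme_detach_controller nvme0
A successful attach here is what the repeated "[[ nvme0 == \n\v\m\e\0 ]]" checks in the log are asserting; a failed authentication would leave the controller list empty and the comparison would fail.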
00:17:39.056 17:06:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:39.056 17:06:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.056 17:06:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.056 17:06:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.056 17:06:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.056 17:06:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:39.056 17:06:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.056 17:06:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.056 17:06:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.056 17:06:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:39.056 17:06:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:17:39.056 17:06:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:39.056 17:06:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:39.056 17:06:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:39.056 17:06:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:39.056 17:06:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTE3MzRmYmQ3NTUzOTE0MGM4YjYwYTk2YjU2NmE2ZDZkZDkzMzdjZjVlYzE4OGI2N2RiODBlYzY2OTE4ODliNegR4ac=: 00:17:39.056 17:06:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:39.056 17:06:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:39.056 17:06:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:39.056 17:06:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTE3MzRmYmQ3NTUzOTE0MGM4YjYwYTk2YjU2NmE2ZDZkZDkzMzdjZjVlYzE4OGI2N2RiODBlYzY2OTE4ODliNegR4ac=: 00:17:39.056 17:06:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:39.056 17:06:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:17:39.056 17:06:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:39.056 17:06:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:39.056 17:06:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:39.056 17:06:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:39.056 17:06:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:39.056 17:06:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:39.056 17:06:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.056 17:06:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.056 17:06:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.056 17:06:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:39.056 17:06:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:39.056 17:06:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:39.056 17:06:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:39.056 17:06:29 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:39.056 17:06:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:39.056 17:06:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:39.056 17:06:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:39.056 17:06:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:39.056 17:06:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:39.056 17:06:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:39.056 17:06:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:39.056 17:06:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.056 17:06:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.314 nvme0n1 00:17:39.314 17:06:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.314 17:06:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:39.314 17:06:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.314 17:06:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:39.314 17:06:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.314 17:06:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.314 17:06:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.314 17:06:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:39.314 17:06:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.314 17:06:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.314 17:06:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.314 17:06:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:39.314 17:06:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:39.314 17:06:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:17:39.314 17:06:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:39.314 17:06:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:39.314 17:06:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:39.314 17:06:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:39.314 17:06:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGJmMTRmM2M3NzU1YTgwOWEzYWU4MzU2MjUxYmMzZDULvU7a: 00:17:39.314 17:06:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTBkNzk4ZjIwZDQ5ZGQ2NTI5N2M2OTNiMDhhNjY3ZGQxMjk2ZTBlOWI5NzJmODY3NjFkNTc5N2JiYTc3NDJjOZIIbO0=: 00:17:39.314 17:06:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:39.314 17:06:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:41.266 17:06:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGJmMTRmM2M3NzU1YTgwOWEzYWU4MzU2MjUxYmMzZDULvU7a: 00:17:41.266 17:06:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:OTBkNzk4ZjIwZDQ5ZGQ2NTI5N2M2OTNiMDhhNjY3ZGQxMjk2ZTBlOWI5NzJmODY3NjFkNTc5N2JiYTc3NDJjOZIIbO0=: ]] 00:17:41.266 17:06:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTBkNzk4ZjIwZDQ5ZGQ2NTI5N2M2OTNiMDhhNjY3ZGQxMjk2ZTBlOWI5NzJmODY3NjFkNTc5N2JiYTc3NDJjOZIIbO0=: 00:17:41.266 17:06:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:17:41.266 17:06:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:41.266 17:06:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:41.266 17:06:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:41.266 17:06:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:41.266 17:06:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:41.266 17:06:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:41.266 17:06:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.266 17:06:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.266 17:06:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.267 17:06:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:41.267 17:06:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:41.267 17:06:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:41.267 17:06:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:41.267 17:06:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:41.267 17:06:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:41.267 17:06:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:41.267 17:06:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:41.267 17:06:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:41.267 17:06:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:41.267 17:06:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:41.267 17:06:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:41.267 17:06:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.267 17:06:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.526 nvme0n1 00:17:41.526 17:06:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.526 17:06:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:41.526 17:06:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:41.526 17:06:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.526 17:06:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.526 17:06:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.526 17:06:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.526 17:06:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- 
# rpc_cmd bdev_nvme_detach_controller nvme0 00:17:41.526 17:06:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.526 17:06:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.526 17:06:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.526 17:06:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:41.526 17:06:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:17:41.526 17:06:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:41.526 17:06:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:41.526 17:06:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:41.526 17:06:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:41.526 17:06:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTQ0YWZmYzc4ZDJhNDVlZWRkOTNmZWNhZWE4YzdjNjI1MzczNzAzMTE2ODg1ZmM0fPXbvA==: 00:17:41.526 17:06:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmNmMmI5NjdmYjE5OThmZWFhMjEyNDA1MGNiNzYxOTkzZDI1YWQyZGE2ZDExODFhMmfSfg==: 00:17:41.526 17:06:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:41.526 17:06:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:41.526 17:06:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTQ0YWZmYzc4ZDJhNDVlZWRkOTNmZWNhZWE4YzdjNjI1MzczNzAzMTE2ODg1ZmM0fPXbvA==: 00:17:41.526 17:06:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmNmMmI5NjdmYjE5OThmZWFhMjEyNDA1MGNiNzYxOTkzZDI1YWQyZGE2ZDExODFhMmfSfg==: ]] 00:17:41.526 17:06:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmNmMmI5NjdmYjE5OThmZWFhMjEyNDA1MGNiNzYxOTkzZDI1YWQyZGE2ZDExODFhMmfSfg==: 00:17:41.526 17:06:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:17:41.526 17:06:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:41.526 17:06:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:41.526 17:06:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:41.526 17:06:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:41.526 17:06:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:41.526 17:06:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:41.526 17:06:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.526 17:06:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.526 17:06:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.526 17:06:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:41.526 17:06:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:41.526 17:06:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:41.526 17:06:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:41.526 17:06:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:41.526 17:06:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:41.526 17:06:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z 
tcp ]] 00:17:41.526 17:06:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:41.526 17:06:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:41.526 17:06:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:41.526 17:06:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:41.526 17:06:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:41.526 17:06:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.526 17:06:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.096 nvme0n1 00:17:42.096 17:06:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.096 17:06:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:42.096 17:06:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:42.096 17:06:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.096 17:06:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.096 17:06:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.096 17:06:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:42.096 17:06:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:42.096 17:06:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.096 17:06:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.096 17:06:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.096 17:06:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:42.096 17:06:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:17:42.096 17:06:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:42.096 17:06:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:42.096 17:06:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:42.096 17:06:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:42.096 17:06:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjUwMmExYmY4ZDExMzkzYTYwMmU2YjYyYzc4N2Q2YWas+WD7: 00:17:42.096 17:06:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDA5YzRjODkzMGJhMjk0NjZiNTBhODFhNTYxMjEzZDDwsH1o: 00:17:42.096 17:06:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:42.096 17:06:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:42.096 17:06:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjUwMmExYmY4ZDExMzkzYTYwMmU2YjYyYzc4N2Q2YWas+WD7: 00:17:42.096 17:06:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDA5YzRjODkzMGJhMjk0NjZiNTBhODFhNTYxMjEzZDDwsH1o: ]] 00:17:42.096 17:06:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDA5YzRjODkzMGJhMjk0NjZiNTBhODFhNTYxMjEzZDDwsH1o: 00:17:42.096 17:06:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:17:42.097 17:06:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:42.097 
17:06:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:42.097 17:06:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:42.097 17:06:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:42.097 17:06:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:42.097 17:06:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:42.097 17:06:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.097 17:06:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.097 17:06:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.097 17:06:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:42.097 17:06:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:42.097 17:06:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:42.097 17:06:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:42.097 17:06:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:42.097 17:06:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:42.097 17:06:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:42.097 17:06:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:42.097 17:06:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:42.097 17:06:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:42.097 17:06:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:42.097 17:06:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:42.097 17:06:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.097 17:06:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.355 nvme0n1 00:17:42.355 17:06:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.355 17:06:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:42.355 17:06:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:42.355 17:06:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.355 17:06:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.355 17:06:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.355 17:06:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:42.355 17:06:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:42.355 17:06:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.355 17:06:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.355 17:06:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.355 17:06:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:42.355 17:06:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 3 00:17:42.355 17:06:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:42.355 17:06:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:42.355 17:06:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:42.355 17:06:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:42.355 17:06:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGE4ZTc1YzcyMTAwOTQ1MTEwYWM3NzYxNWY2MzRlOGU5NDYwMzAzYjVlOTkwZGUwZOK5Ig==: 00:17:42.355 17:06:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWU3MDQ1ZTc0NTc2ZTZmMzk0YmYyMWU4NWZlMDc2Njezi5ED: 00:17:42.355 17:06:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:42.355 17:06:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:42.355 17:06:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGE4ZTc1YzcyMTAwOTQ1MTEwYWM3NzYxNWY2MzRlOGU5NDYwMzAzYjVlOTkwZGUwZOK5Ig==: 00:17:42.355 17:06:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWU3MDQ1ZTc0NTc2ZTZmMzk0YmYyMWU4NWZlMDc2Njezi5ED: ]] 00:17:42.355 17:06:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWU3MDQ1ZTc0NTc2ZTZmMzk0YmYyMWU4NWZlMDc2Njezi5ED: 00:17:42.355 17:06:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:17:42.355 17:06:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:42.355 17:06:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:42.355 17:06:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:42.355 17:06:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:42.355 17:06:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:42.355 17:06:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:42.355 17:06:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.355 17:06:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.355 17:06:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.355 17:06:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:42.355 17:06:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:42.355 17:06:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:42.355 17:06:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:42.355 17:06:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:42.355 17:06:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:42.355 17:06:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:42.355 17:06:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:42.355 17:06:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:42.355 17:06:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:42.355 17:06:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:42.355 17:06:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:42.355 17:06:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.355 17:06:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.929 nvme0n1 00:17:42.929 17:06:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.929 17:06:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:42.929 17:06:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:42.929 17:06:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.929 17:06:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.929 17:06:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.929 17:06:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:42.929 17:06:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:42.929 17:06:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.929 17:06:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.929 17:06:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.929 17:06:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:42.929 17:06:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:17:42.929 17:06:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:42.929 17:06:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:42.929 17:06:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:42.929 17:06:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:42.929 17:06:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTE3MzRmYmQ3NTUzOTE0MGM4YjYwYTk2YjU2NmE2ZDZkZDkzMzdjZjVlYzE4OGI2N2RiODBlYzY2OTE4ODliNegR4ac=: 00:17:42.929 17:06:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:42.929 17:06:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:42.929 17:06:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:42.929 17:06:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTE3MzRmYmQ3NTUzOTE0MGM4YjYwYTk2YjU2NmE2ZDZkZDkzMzdjZjVlYzE4OGI2N2RiODBlYzY2OTE4ODliNegR4ac=: 00:17:42.929 17:06:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:42.929 17:06:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:17:42.929 17:06:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:42.929 17:06:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:42.929 17:06:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:42.929 17:06:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:42.929 17:06:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:42.929 17:06:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:42.929 17:06:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.929 17:06:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.929 17:06:33 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.929 17:06:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:42.929 17:06:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:42.929 17:06:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:42.929 17:06:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:42.929 17:06:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:42.929 17:06:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:42.929 17:06:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:42.929 17:06:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:42.929 17:06:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:42.930 17:06:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:42.930 17:06:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:42.930 17:06:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:42.930 17:06:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.930 17:06:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:43.187 nvme0n1 00:17:43.187 17:06:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.187 17:06:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:43.187 17:06:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.187 17:06:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:43.187 17:06:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:43.187 17:06:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.187 17:06:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:43.187 17:06:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:43.187 17:06:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.187 17:06:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:43.188 17:06:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.188 17:06:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:43.188 17:06:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:43.188 17:06:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:17:43.188 17:06:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:43.188 17:06:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:43.188 17:06:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:43.188 17:06:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:43.188 17:06:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGJmMTRmM2M3NzU1YTgwOWEzYWU4MzU2MjUxYmMzZDULvU7a: 00:17:43.188 17:06:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:OTBkNzk4ZjIwZDQ5ZGQ2NTI5N2M2OTNiMDhhNjY3ZGQxMjk2ZTBlOWI5NzJmODY3NjFkNTc5N2JiYTc3NDJjOZIIbO0=: 00:17:43.188 17:06:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:43.188 17:06:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:43.188 17:06:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGJmMTRmM2M3NzU1YTgwOWEzYWU4MzU2MjUxYmMzZDULvU7a: 00:17:43.188 17:06:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTBkNzk4ZjIwZDQ5ZGQ2NTI5N2M2OTNiMDhhNjY3ZGQxMjk2ZTBlOWI5NzJmODY3NjFkNTc5N2JiYTc3NDJjOZIIbO0=: ]] 00:17:43.188 17:06:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTBkNzk4ZjIwZDQ5ZGQ2NTI5N2M2OTNiMDhhNjY3ZGQxMjk2ZTBlOWI5NzJmODY3NjFkNTc5N2JiYTc3NDJjOZIIbO0=: 00:17:43.188 17:06:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:17:43.446 17:06:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:43.446 17:06:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:43.446 17:06:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:43.446 17:06:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:43.446 17:06:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:43.446 17:06:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:43.446 17:06:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.446 17:06:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:43.446 17:06:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.446 17:06:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:43.446 17:06:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:43.446 17:06:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:43.446 17:06:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:43.446 17:06:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:43.446 17:06:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:43.446 17:06:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:43.446 17:06:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:43.446 17:06:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:43.446 17:06:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:43.446 17:06:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:43.446 17:06:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:43.446 17:06:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.446 17:06:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:44.013 nvme0n1 00:17:44.013 17:06:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.013 17:06:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:44.013 17:06:34 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:44.013 17:06:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.013 17:06:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:44.013 17:06:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.013 17:06:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.013 17:06:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:44.013 17:06:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.013 17:06:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:44.013 17:06:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.013 17:06:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:44.013 17:06:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:17:44.013 17:06:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:44.013 17:06:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:44.013 17:06:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:44.013 17:06:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:44.013 17:06:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTQ0YWZmYzc4ZDJhNDVlZWRkOTNmZWNhZWE4YzdjNjI1MzczNzAzMTE2ODg1ZmM0fPXbvA==: 00:17:44.013 17:06:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmNmMmI5NjdmYjE5OThmZWFhMjEyNDA1MGNiNzYxOTkzZDI1YWQyZGE2ZDExODFhMmfSfg==: 00:17:44.013 17:06:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:44.013 17:06:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:44.013 17:06:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTQ0YWZmYzc4ZDJhNDVlZWRkOTNmZWNhZWE4YzdjNjI1MzczNzAzMTE2ODg1ZmM0fPXbvA==: 00:17:44.013 17:06:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmNmMmI5NjdmYjE5OThmZWFhMjEyNDA1MGNiNzYxOTkzZDI1YWQyZGE2ZDExODFhMmfSfg==: ]] 00:17:44.013 17:06:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmNmMmI5NjdmYjE5OThmZWFhMjEyNDA1MGNiNzYxOTkzZDI1YWQyZGE2ZDExODFhMmfSfg==: 00:17:44.013 17:06:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:17:44.013 17:06:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:44.013 17:06:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:44.013 17:06:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:44.013 17:06:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:44.013 17:06:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:44.013 17:06:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:44.013 17:06:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.013 17:06:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:44.013 17:06:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.013 17:06:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:44.013 17:06:34 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:17:44.013 17:06:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:44.013 17:06:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:44.013 17:06:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:44.013 17:06:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:44.013 17:06:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:44.013 17:06:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:44.013 17:06:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:44.013 17:06:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:44.013 17:06:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:44.013 17:06:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:44.013 17:06:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.013 17:06:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:44.579 nvme0n1 00:17:44.579 17:06:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.580 17:06:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:44.580 17:06:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.580 17:06:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:44.580 17:06:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:44.580 17:06:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.580 17:06:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.580 17:06:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:44.580 17:06:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.580 17:06:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:44.580 17:06:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.580 17:06:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:44.580 17:06:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:17:44.580 17:06:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:44.580 17:06:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:44.580 17:06:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:44.580 17:06:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:44.580 17:06:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjUwMmExYmY4ZDExMzkzYTYwMmU2YjYyYzc4N2Q2YWas+WD7: 00:17:44.580 17:06:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDA5YzRjODkzMGJhMjk0NjZiNTBhODFhNTYxMjEzZDDwsH1o: 00:17:44.580 17:06:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:44.580 17:06:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:44.580 17:06:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:ZjUwMmExYmY4ZDExMzkzYTYwMmU2YjYyYzc4N2Q2YWas+WD7: 00:17:44.580 17:06:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDA5YzRjODkzMGJhMjk0NjZiNTBhODFhNTYxMjEzZDDwsH1o: ]] 00:17:44.580 17:06:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDA5YzRjODkzMGJhMjk0NjZiNTBhODFhNTYxMjEzZDDwsH1o: 00:17:44.580 17:06:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:17:44.580 17:06:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:44.580 17:06:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:44.580 17:06:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:44.580 17:06:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:44.580 17:06:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:44.580 17:06:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:44.580 17:06:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.580 17:06:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:44.580 17:06:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.580 17:06:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:44.580 17:06:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:44.580 17:06:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:44.580 17:06:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:44.580 17:06:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:44.580 17:06:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:44.580 17:06:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:44.580 17:06:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:44.580 17:06:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:44.580 17:06:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:44.580 17:06:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:44.838 17:06:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:44.838 17:06:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.838 17:06:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:45.406 nvme0n1 00:17:45.406 17:06:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.406 17:06:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:45.406 17:06:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.406 17:06:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:45.406 17:06:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:45.406 17:06:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.406 17:06:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:45.406 
17:06:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:45.406 17:06:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.406 17:06:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:45.406 17:06:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.406 17:06:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:45.406 17:06:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:17:45.406 17:06:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:45.406 17:06:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:45.406 17:06:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:45.406 17:06:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:45.406 17:06:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGE4ZTc1YzcyMTAwOTQ1MTEwYWM3NzYxNWY2MzRlOGU5NDYwMzAzYjVlOTkwZGUwZOK5Ig==: 00:17:45.406 17:06:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWU3MDQ1ZTc0NTc2ZTZmMzk0YmYyMWU4NWZlMDc2Njezi5ED: 00:17:45.406 17:06:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:45.406 17:06:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:45.406 17:06:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGE4ZTc1YzcyMTAwOTQ1MTEwYWM3NzYxNWY2MzRlOGU5NDYwMzAzYjVlOTkwZGUwZOK5Ig==: 00:17:45.406 17:06:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWU3MDQ1ZTc0NTc2ZTZmMzk0YmYyMWU4NWZlMDc2Njezi5ED: ]] 00:17:45.406 17:06:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWU3MDQ1ZTc0NTc2ZTZmMzk0YmYyMWU4NWZlMDc2Njezi5ED: 00:17:45.406 17:06:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:17:45.406 17:06:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:45.406 17:06:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:45.406 17:06:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:45.406 17:06:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:45.406 17:06:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:45.406 17:06:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:45.406 17:06:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.406 17:06:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:45.406 17:06:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.406 17:06:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:45.406 17:06:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:45.406 17:06:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:45.406 17:06:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:45.406 17:06:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:45.406 17:06:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:45.406 17:06:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
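[editor's sketch] Each round also reconfigures the target side first: the nvmet_auth_set_key calls seen in the trace echo the HMAC name, the DH group, the host key, and (when present) the controller key. The trace only shows the echo of the values, not their destinations, so the configfs paths and attribute names below are an assumption based on the Linux nvmet-auth interface, not something confirmed by this log; the digest, group, and key values are copied from the ffdhe8192/keyid-3 round above.
#!/usr/bin/env bash
# Rough guess at the target-side effect of nvmet_auth_set_key (requires root).
HOSTNQN=nqn.2024-02.io.spdk:host0
HOST_DIR=/sys/kernel/config/nvmet/hosts/$HOSTNQN   # assumed configfs location
DIGEST=sha256
DHGROUP=ffdhe8192
KEY='DHHC-1:02:OGE4ZTc1YzcyMTAwOTQ1MTEwYWM3NzYxNWY2MzRlOGU5NDYwMzAzYjVlOTkwZGUwZOK5Ig==:'
CKEY='DHHC-1:00:YWU3MDQ1ZTc0NTc2ZTZmMzk0YmYyMWU4NWZlMDc2Njezi5ED:'
echo "hmac($DIGEST)" > "$HOST_DIR/dhchap_hash"     # matches: echo 'hmac(sha256)'
echo "$DHGROUP"      > "$HOST_DIR/dhchap_dhgroup"  # matches: echo ffdhe8192
echo "$KEY"          > "$HOST_DIR/dhchap_key"      # host secret (unidirectional auth)
[[ -n $CKEY ]] && echo "$CKEY" > "$HOST_DIR/dhchap_ctrl_key"  # bidirectional case
The keyid-4 rounds in the log pass an empty ckey, which is why their "[[ -z '' ]]" checks skip the controller-key step and the subsequent attach omits --dhchap-ctrlr-key.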
00:17:45.406 17:06:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:45.406 17:06:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:45.406 17:06:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:45.406 17:06:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:45.406 17:06:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:45.406 17:06:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.406 17:06:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:45.973 nvme0n1 00:17:45.973 17:06:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.973 17:06:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:45.973 17:06:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.973 17:06:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:45.973 17:06:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:45.973 17:06:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.973 17:06:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:45.973 17:06:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:45.973 17:06:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.973 17:06:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:45.973 17:06:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.973 17:06:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:45.973 17:06:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:17:45.973 17:06:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:45.973 17:06:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:45.973 17:06:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:45.973 17:06:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:45.973 17:06:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTE3MzRmYmQ3NTUzOTE0MGM4YjYwYTk2YjU2NmE2ZDZkZDkzMzdjZjVlYzE4OGI2N2RiODBlYzY2OTE4ODliNegR4ac=: 00:17:45.973 17:06:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:45.973 17:06:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:45.973 17:06:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:45.973 17:06:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTE3MzRmYmQ3NTUzOTE0MGM4YjYwYTk2YjU2NmE2ZDZkZDkzMzdjZjVlYzE4OGI2N2RiODBlYzY2OTE4ODliNegR4ac=: 00:17:45.973 17:06:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:45.973 17:06:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:17:45.973 17:06:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:45.973 17:06:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:45.973 17:06:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:45.973 
17:06:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:45.973 17:06:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:45.973 17:06:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:45.973 17:06:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.973 17:06:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:45.973 17:06:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.973 17:06:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:45.973 17:06:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:45.973 17:06:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:45.973 17:06:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:45.973 17:06:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:45.973 17:06:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:45.973 17:06:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:45.973 17:06:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:45.973 17:06:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:45.973 17:06:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:45.973 17:06:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:45.973 17:06:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:45.973 17:06:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.973 17:06:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.576 nvme0n1 00:17:46.576 17:06:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.576 17:06:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:46.576 17:06:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:46.576 17:06:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.576 17:06:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.576 17:06:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.835 17:06:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.835 17:06:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:46.835 17:06:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.835 17:06:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.835 17:06:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.835 17:06:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:17:46.835 17:06:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:46.835 17:06:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:46.835 17:06:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:17:46.835 17:06:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:46.835 17:06:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:46.835 17:06:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:46.835 17:06:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:46.835 17:06:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGJmMTRmM2M3NzU1YTgwOWEzYWU4MzU2MjUxYmMzZDULvU7a: 00:17:46.836 17:06:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTBkNzk4ZjIwZDQ5ZGQ2NTI5N2M2OTNiMDhhNjY3ZGQxMjk2ZTBlOWI5NzJmODY3NjFkNTc5N2JiYTc3NDJjOZIIbO0=: 00:17:46.836 17:06:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:46.836 17:06:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:46.836 17:06:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGJmMTRmM2M3NzU1YTgwOWEzYWU4MzU2MjUxYmMzZDULvU7a: 00:17:46.836 17:06:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTBkNzk4ZjIwZDQ5ZGQ2NTI5N2M2OTNiMDhhNjY3ZGQxMjk2ZTBlOWI5NzJmODY3NjFkNTc5N2JiYTc3NDJjOZIIbO0=: ]] 00:17:46.836 17:06:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTBkNzk4ZjIwZDQ5ZGQ2NTI5N2M2OTNiMDhhNjY3ZGQxMjk2ZTBlOWI5NzJmODY3NjFkNTc5N2JiYTc3NDJjOZIIbO0=: 00:17:46.836 17:06:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:17:46.836 17:06:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:46.836 17:06:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:46.836 17:06:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:46.836 17:06:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:46.836 17:06:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:46.836 17:06:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:46.836 17:06:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.836 17:06:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.836 17:06:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.836 17:06:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:46.836 17:06:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:46.836 17:06:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:46.836 17:06:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:46.836 17:06:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:46.836 17:06:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:46.836 17:06:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:46.836 17:06:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:46.836 17:06:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:46.836 17:06:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:46.836 17:06:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:46.836 17:06:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:46.836 17:06:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.836 17:06:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.836 nvme0n1 00:17:46.836 17:06:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.836 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:46.836 17:06:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.836 17:06:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.836 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:46.836 17:06:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.836 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.836 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:46.836 17:06:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.836 17:06:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.836 17:06:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.836 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:46.836 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:17:46.836 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:46.836 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:46.836 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:46.836 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:46.836 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTQ0YWZmYzc4ZDJhNDVlZWRkOTNmZWNhZWE4YzdjNjI1MzczNzAzMTE2ODg1ZmM0fPXbvA==: 00:17:46.836 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmNmMmI5NjdmYjE5OThmZWFhMjEyNDA1MGNiNzYxOTkzZDI1YWQyZGE2ZDExODFhMmfSfg==: 00:17:46.836 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:46.836 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:46.836 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTQ0YWZmYzc4ZDJhNDVlZWRkOTNmZWNhZWE4YzdjNjI1MzczNzAzMTE2ODg1ZmM0fPXbvA==: 00:17:46.836 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmNmMmI5NjdmYjE5OThmZWFhMjEyNDA1MGNiNzYxOTkzZDI1YWQyZGE2ZDExODFhMmfSfg==: ]] 00:17:46.836 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmNmMmI5NjdmYjE5OThmZWFhMjEyNDA1MGNiNzYxOTkzZDI1YWQyZGE2ZDExODFhMmfSfg==: 00:17:46.836 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:17:46.836 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:46.836 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:46.836 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:46.836 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:46.836 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
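On the initiator side, the auth.sh@60/@61 entries that follow drive the same parameters through SPDK's RPC interface: bdev_nvme_set_options restricts the digests and DH groups the host may negotiate, and bdev_nvme_attach_controller passes the host key and controller key by name. A hedged equivalent using SPDK's scripts/rpc.py client directly is sketched here; rpc_cmd in the trace is the test suite's wrapper around that client, and the key names key1/ckey1 are assumed to have been registered with the bdev_nvme layer earlier in the test, outside this excerpt.

    # Hedged sketch of the RPC sequence traced at auth.sh@60/@61 for the
    # sha384/ffdhe2048/keyid=1 combination, issued via scripts/rpc.py.
    scripts/rpc.py bdev_nvme_set_options \
        --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1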
00:17:46.836 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:46.836 17:06:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.836 17:06:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.836 17:06:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.836 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:46.836 17:06:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:46.836 17:06:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:46.836 17:06:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:46.836 17:06:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:46.836 17:06:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:46.836 17:06:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:46.836 17:06:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:46.836 17:06:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:46.836 17:06:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:46.836 17:06:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:46.836 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:46.836 17:06:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.836 17:06:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.095 nvme0n1 00:17:47.095 17:06:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.095 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:47.095 17:06:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.095 17:06:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.095 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:47.095 17:06:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.095 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.095 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:47.095 17:06:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.095 17:06:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.095 17:06:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.095 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:47.095 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:17:47.095 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:47.095 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:47.095 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:47.095 17:06:37 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:17:47.095 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjUwMmExYmY4ZDExMzkzYTYwMmU2YjYyYzc4N2Q2YWas+WD7: 00:17:47.095 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDA5YzRjODkzMGJhMjk0NjZiNTBhODFhNTYxMjEzZDDwsH1o: 00:17:47.095 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:47.095 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:47.095 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjUwMmExYmY4ZDExMzkzYTYwMmU2YjYyYzc4N2Q2YWas+WD7: 00:17:47.095 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDA5YzRjODkzMGJhMjk0NjZiNTBhODFhNTYxMjEzZDDwsH1o: ]] 00:17:47.095 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDA5YzRjODkzMGJhMjk0NjZiNTBhODFhNTYxMjEzZDDwsH1o: 00:17:47.095 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:17:47.095 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:47.095 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:47.095 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:47.095 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:47.095 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:47.095 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:47.095 17:06:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.095 17:06:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.095 17:06:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.095 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:47.095 17:06:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:47.095 17:06:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:47.095 17:06:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:47.095 17:06:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:47.095 17:06:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:47.095 17:06:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:47.095 17:06:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:47.095 17:06:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:47.095 17:06:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:47.095 17:06:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:47.095 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:47.095 17:06:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.095 17:06:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.354 nvme0n1 00:17:47.354 17:06:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.354 17:06:37 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:47.354 17:06:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.354 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:47.354 17:06:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.354 17:06:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.354 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.354 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:47.354 17:06:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.354 17:06:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.354 17:06:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.354 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:47.354 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:17:47.354 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:47.354 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:47.354 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:47.354 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:47.354 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGE4ZTc1YzcyMTAwOTQ1MTEwYWM3NzYxNWY2MzRlOGU5NDYwMzAzYjVlOTkwZGUwZOK5Ig==: 00:17:47.354 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWU3MDQ1ZTc0NTc2ZTZmMzk0YmYyMWU4NWZlMDc2Njezi5ED: 00:17:47.354 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:47.354 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:47.354 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGE4ZTc1YzcyMTAwOTQ1MTEwYWM3NzYxNWY2MzRlOGU5NDYwMzAzYjVlOTkwZGUwZOK5Ig==: 00:17:47.354 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWU3MDQ1ZTc0NTc2ZTZmMzk0YmYyMWU4NWZlMDc2Njezi5ED: ]] 00:17:47.354 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWU3MDQ1ZTc0NTc2ZTZmMzk0YmYyMWU4NWZlMDc2Njezi5ED: 00:17:47.354 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:17:47.354 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:47.354 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:47.354 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:47.354 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:47.354 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:47.354 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:47.354 17:06:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.354 17:06:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.354 17:06:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.354 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:47.354 17:06:37 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:17:47.354 17:06:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:47.354 17:06:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:47.354 17:06:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:47.354 17:06:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:47.354 17:06:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:47.354 17:06:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:47.354 17:06:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:47.354 17:06:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:47.354 17:06:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:47.354 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:47.354 17:06:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.354 17:06:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.354 nvme0n1 00:17:47.354 17:06:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.354 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:47.354 17:06:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.354 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:47.354 17:06:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.354 17:06:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.354 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.354 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:47.354 17:06:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.354 17:06:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.612 17:06:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.612 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:47.612 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:17:47.612 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:47.613 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:47.613 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:47.613 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:47.613 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTE3MzRmYmQ3NTUzOTE0MGM4YjYwYTk2YjU2NmE2ZDZkZDkzMzdjZjVlYzE4OGI2N2RiODBlYzY2OTE4ODliNegR4ac=: 00:17:47.613 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:47.613 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:47.613 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:47.613 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MTE3MzRmYmQ3NTUzOTE0MGM4YjYwYTk2YjU2NmE2ZDZkZDkzMzdjZjVlYzE4OGI2N2RiODBlYzY2OTE4ODliNegR4ac=: 00:17:47.613 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:47.613 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:17:47.613 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:47.613 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:47.613 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:47.613 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:47.613 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:47.613 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:47.613 17:06:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.613 17:06:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.613 17:06:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.613 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:47.613 17:06:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:47.613 17:06:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:47.613 17:06:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:47.613 17:06:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:47.613 17:06:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:47.613 17:06:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:47.613 17:06:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:47.613 17:06:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:47.613 17:06:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:47.613 17:06:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:47.613 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:47.613 17:06:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.613 17:06:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.613 nvme0n1 00:17:47.613 17:06:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.613 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:47.613 17:06:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.613 17:06:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.613 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:47.613 17:06:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.613 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.613 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:47.613 17:06:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:17:47.613 17:06:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.613 17:06:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.613 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:47.613 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:47.613 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:17:47.613 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:47.613 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:47.613 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:47.613 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:47.613 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGJmMTRmM2M3NzU1YTgwOWEzYWU4MzU2MjUxYmMzZDULvU7a: 00:17:47.613 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTBkNzk4ZjIwZDQ5ZGQ2NTI5N2M2OTNiMDhhNjY3ZGQxMjk2ZTBlOWI5NzJmODY3NjFkNTc5N2JiYTc3NDJjOZIIbO0=: 00:17:47.613 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:47.613 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:47.613 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGJmMTRmM2M3NzU1YTgwOWEzYWU4MzU2MjUxYmMzZDULvU7a: 00:17:47.613 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTBkNzk4ZjIwZDQ5ZGQ2NTI5N2M2OTNiMDhhNjY3ZGQxMjk2ZTBlOWI5NzJmODY3NjFkNTc5N2JiYTc3NDJjOZIIbO0=: ]] 00:17:47.613 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTBkNzk4ZjIwZDQ5ZGQ2NTI5N2M2OTNiMDhhNjY3ZGQxMjk2ZTBlOWI5NzJmODY3NjFkNTc5N2JiYTc3NDJjOZIIbO0=: 00:17:47.613 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:17:47.613 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:47.613 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:47.613 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:47.613 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:47.613 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:47.613 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:47.613 17:06:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.613 17:06:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.613 17:06:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.613 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:47.613 17:06:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:47.613 17:06:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:47.613 17:06:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:47.613 17:06:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:47.613 17:06:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:47.613 17:06:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
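The get_main_ns_ip block that recurs before every attach (nvmf/common.sh@741-755, continuing below) picks the address to dial: the transport name selects the name of the environment variable holding the address, that name is dereferenced, and the result (10.0.0.1 for tcp in this run) is echoed. A hedged reconstruction follows; the exact error handling and anything not visible in the trace are assumptions.

    # Hedged reconstruction of get_main_ns_ip from the nvmf/common.sh@741-755
    # entries in this trace.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates=()
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP

        # TEST_TRANSPORT is tcp here, so the tcp candidate is chosen.
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates["$TEST_TRANSPORT"]} ]] && return 1
        ip=${ip_candidates["$TEST_TRANSPORT"]}
        ip=${!ip}                       # indirect expansion -> 10.0.0.1 in this run
        [[ -z $ip ]] && return 1
        echo "$ip"
    }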
00:17:47.613 17:06:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:47.613 17:06:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:47.613 17:06:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:47.613 17:06:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:47.613 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:47.613 17:06:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.613 17:06:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.872 nvme0n1 00:17:47.872 17:06:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.872 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:47.872 17:06:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.872 17:06:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.872 17:06:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:47.872 17:06:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.872 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.872 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:47.872 17:06:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.872 17:06:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.872 17:06:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.872 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:47.872 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:17:47.872 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:47.872 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:47.872 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:47.872 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:47.872 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTQ0YWZmYzc4ZDJhNDVlZWRkOTNmZWNhZWE4YzdjNjI1MzczNzAzMTE2ODg1ZmM0fPXbvA==: 00:17:47.872 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmNmMmI5NjdmYjE5OThmZWFhMjEyNDA1MGNiNzYxOTkzZDI1YWQyZGE2ZDExODFhMmfSfg==: 00:17:47.872 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:47.872 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:47.872 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTQ0YWZmYzc4ZDJhNDVlZWRkOTNmZWNhZWE4YzdjNjI1MzczNzAzMTE2ODg1ZmM0fPXbvA==: 00:17:47.872 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmNmMmI5NjdmYjE5OThmZWFhMjEyNDA1MGNiNzYxOTkzZDI1YWQyZGE2ZDExODFhMmfSfg==: ]] 00:17:47.872 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmNmMmI5NjdmYjE5OThmZWFhMjEyNDA1MGNiNzYxOTkzZDI1YWQyZGE2ZDExODFhMmfSfg==: 00:17:47.872 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
00:17:47.872 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:47.872 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:47.872 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:47.872 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:47.872 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:47.872 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:47.872 17:06:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.872 17:06:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.872 17:06:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.872 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:47.872 17:06:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:47.872 17:06:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:47.872 17:06:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:47.872 17:06:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:47.872 17:06:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:47.872 17:06:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:47.872 17:06:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:47.872 17:06:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:47.872 17:06:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:47.872 17:06:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:47.872 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:47.872 17:06:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.872 17:06:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.130 nvme0n1 00:17:48.130 17:06:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.130 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:48.130 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:48.130 17:06:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.130 17:06:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.130 17:06:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.130 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.130 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:48.130 17:06:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.130 17:06:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.130 17:06:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.130 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:17:48.130 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:17:48.130 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:48.130 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:48.130 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:48.130 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:48.130 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjUwMmExYmY4ZDExMzkzYTYwMmU2YjYyYzc4N2Q2YWas+WD7: 00:17:48.130 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDA5YzRjODkzMGJhMjk0NjZiNTBhODFhNTYxMjEzZDDwsH1o: 00:17:48.130 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:48.130 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:48.130 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjUwMmExYmY4ZDExMzkzYTYwMmU2YjYyYzc4N2Q2YWas+WD7: 00:17:48.130 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDA5YzRjODkzMGJhMjk0NjZiNTBhODFhNTYxMjEzZDDwsH1o: ]] 00:17:48.130 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDA5YzRjODkzMGJhMjk0NjZiNTBhODFhNTYxMjEzZDDwsH1o: 00:17:48.130 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:17:48.130 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:48.130 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:48.130 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:48.130 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:48.130 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:48.130 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:48.130 17:06:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.130 17:06:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.131 17:06:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.131 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:48.131 17:06:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:48.131 17:06:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:48.131 17:06:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:48.131 17:06:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:48.131 17:06:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:48.131 17:06:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:48.131 17:06:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:48.131 17:06:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:48.131 17:06:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:48.131 17:06:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:48.131 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:48.131 17:06:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.131 17:06:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.131 nvme0n1 00:17:48.131 17:06:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.131 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:48.131 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:48.131 17:06:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.131 17:06:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.131 17:06:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.389 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.389 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:48.389 17:06:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.389 17:06:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.389 17:06:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.389 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:48.389 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:17:48.389 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:48.389 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:48.389 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:48.389 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:48.389 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGE4ZTc1YzcyMTAwOTQ1MTEwYWM3NzYxNWY2MzRlOGU5NDYwMzAzYjVlOTkwZGUwZOK5Ig==: 00:17:48.389 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWU3MDQ1ZTc0NTc2ZTZmMzk0YmYyMWU4NWZlMDc2Njezi5ED: 00:17:48.389 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:48.389 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:48.389 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGE4ZTc1YzcyMTAwOTQ1MTEwYWM3NzYxNWY2MzRlOGU5NDYwMzAzYjVlOTkwZGUwZOK5Ig==: 00:17:48.389 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWU3MDQ1ZTc0NTc2ZTZmMzk0YmYyMWU4NWZlMDc2Njezi5ED: ]] 00:17:48.389 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWU3MDQ1ZTc0NTc2ZTZmMzk0YmYyMWU4NWZlMDc2Njezi5ED: 00:17:48.389 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:17:48.389 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:48.389 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:48.389 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:48.389 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:48.389 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:48.389 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:48.389 17:06:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.389 17:06:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.389 17:06:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.389 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:48.389 17:06:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:48.389 17:06:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:48.389 17:06:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:48.389 17:06:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:48.389 17:06:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:48.389 17:06:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:48.389 17:06:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:48.389 17:06:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:48.389 17:06:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:48.389 17:06:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:48.389 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:48.389 17:06:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.389 17:06:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.389 nvme0n1 00:17:48.389 17:06:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.389 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:48.389 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:48.389 17:06:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.389 17:06:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.389 17:06:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.389 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.389 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:48.389 17:06:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.389 17:06:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.389 17:06:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.389 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:48.389 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:17:48.389 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:48.389 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:48.389 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:48.389 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:48.389 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MTE3MzRmYmQ3NTUzOTE0MGM4YjYwYTk2YjU2NmE2ZDZkZDkzMzdjZjVlYzE4OGI2N2RiODBlYzY2OTE4ODliNegR4ac=: 00:17:48.390 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:48.390 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:48.390 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:48.390 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTE3MzRmYmQ3NTUzOTE0MGM4YjYwYTk2YjU2NmE2ZDZkZDkzMzdjZjVlYzE4OGI2N2RiODBlYzY2OTE4ODliNegR4ac=: 00:17:48.390 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:48.390 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:17:48.390 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:48.390 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:48.390 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:48.390 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:48.390 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:48.390 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:48.390 17:06:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.390 17:06:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.390 17:06:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.390 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:48.390 17:06:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:48.390 17:06:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:48.390 17:06:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:48.390 17:06:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:48.390 17:06:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:48.390 17:06:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:48.390 17:06:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:48.390 17:06:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:48.390 17:06:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:48.390 17:06:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:48.390 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:48.390 17:06:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.390 17:06:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.648 nvme0n1 00:17:48.648 17:06:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.648 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:48.648 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:48.648 17:06:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.648 17:06:38 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.648 17:06:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.648 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.648 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:48.648 17:06:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.648 17:06:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.648 17:06:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.648 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:48.648 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:48.648 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:17:48.648 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:48.648 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:48.648 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:48.648 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:48.648 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGJmMTRmM2M3NzU1YTgwOWEzYWU4MzU2MjUxYmMzZDULvU7a: 00:17:48.648 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTBkNzk4ZjIwZDQ5ZGQ2NTI5N2M2OTNiMDhhNjY3ZGQxMjk2ZTBlOWI5NzJmODY3NjFkNTc5N2JiYTc3NDJjOZIIbO0=: 00:17:48.648 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:48.648 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:48.648 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGJmMTRmM2M3NzU1YTgwOWEzYWU4MzU2MjUxYmMzZDULvU7a: 00:17:48.648 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTBkNzk4ZjIwZDQ5ZGQ2NTI5N2M2OTNiMDhhNjY3ZGQxMjk2ZTBlOWI5NzJmODY3NjFkNTc5N2JiYTc3NDJjOZIIbO0=: ]] 00:17:48.648 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTBkNzk4ZjIwZDQ5ZGQ2NTI5N2M2OTNiMDhhNjY3ZGQxMjk2ZTBlOWI5NzJmODY3NjFkNTc5N2JiYTc3NDJjOZIIbO0=: 00:17:48.648 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:17:48.648 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:48.648 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:48.648 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:48.648 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:48.648 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:48.648 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:48.648 17:06:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.648 17:06:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.648 17:06:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.648 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:48.648 17:06:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:48.648 17:06:38 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:17:48.648 17:06:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:48.648 17:06:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:48.648 17:06:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:48.648 17:06:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:48.648 17:06:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:48.648 17:06:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:48.648 17:06:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:48.648 17:06:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:48.648 17:06:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:48.648 17:06:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.648 17:06:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.906 nvme0n1 00:17:48.906 17:06:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.906 17:06:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:48.906 17:06:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:48.906 17:06:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.906 17:06:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.906 17:06:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.906 17:06:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.906 17:06:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:48.906 17:06:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.906 17:06:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.906 17:06:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.906 17:06:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:48.906 17:06:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:17:48.906 17:06:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:48.906 17:06:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:48.906 17:06:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:48.906 17:06:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:48.906 17:06:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTQ0YWZmYzc4ZDJhNDVlZWRkOTNmZWNhZWE4YzdjNjI1MzczNzAzMTE2ODg1ZmM0fPXbvA==: 00:17:48.906 17:06:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmNmMmI5NjdmYjE5OThmZWFhMjEyNDA1MGNiNzYxOTkzZDI1YWQyZGE2ZDExODFhMmfSfg==: 00:17:48.906 17:06:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:48.906 17:06:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:48.906 17:06:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MTQ0YWZmYzc4ZDJhNDVlZWRkOTNmZWNhZWE4YzdjNjI1MzczNzAzMTE2ODg1ZmM0fPXbvA==: 00:17:48.906 17:06:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmNmMmI5NjdmYjE5OThmZWFhMjEyNDA1MGNiNzYxOTkzZDI1YWQyZGE2ZDExODFhMmfSfg==: ]] 00:17:48.906 17:06:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmNmMmI5NjdmYjE5OThmZWFhMjEyNDA1MGNiNzYxOTkzZDI1YWQyZGE2ZDExODFhMmfSfg==: 00:17:48.906 17:06:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:17:48.906 17:06:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:48.906 17:06:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:48.906 17:06:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:48.906 17:06:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:48.906 17:06:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:48.906 17:06:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:48.906 17:06:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.906 17:06:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.906 17:06:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.906 17:06:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:48.906 17:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:48.906 17:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:48.906 17:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:48.906 17:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:48.906 17:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:48.906 17:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:48.906 17:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:48.906 17:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:48.906 17:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:48.906 17:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:48.907 17:06:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:48.907 17:06:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.907 17:06:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.165 nvme0n1 00:17:49.165 17:06:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.165 17:06:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:49.165 17:06:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.165 17:06:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.165 17:06:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:49.165 17:06:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.165 17:06:39 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.165 17:06:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:49.165 17:06:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.165 17:06:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.165 17:06:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.165 17:06:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:49.165 17:06:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:17:49.165 17:06:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:49.165 17:06:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:49.165 17:06:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:49.165 17:06:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:49.165 17:06:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjUwMmExYmY4ZDExMzkzYTYwMmU2YjYyYzc4N2Q2YWas+WD7: 00:17:49.165 17:06:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDA5YzRjODkzMGJhMjk0NjZiNTBhODFhNTYxMjEzZDDwsH1o: 00:17:49.165 17:06:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:49.165 17:06:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:49.165 17:06:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjUwMmExYmY4ZDExMzkzYTYwMmU2YjYyYzc4N2Q2YWas+WD7: 00:17:49.165 17:06:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDA5YzRjODkzMGJhMjk0NjZiNTBhODFhNTYxMjEzZDDwsH1o: ]] 00:17:49.165 17:06:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDA5YzRjODkzMGJhMjk0NjZiNTBhODFhNTYxMjEzZDDwsH1o: 00:17:49.165 17:06:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:17:49.165 17:06:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:49.165 17:06:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:49.165 17:06:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:49.165 17:06:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:49.165 17:06:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:49.165 17:06:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:49.166 17:06:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.166 17:06:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.166 17:06:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.166 17:06:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:49.166 17:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:49.166 17:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:49.166 17:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:49.166 17:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:49.166 17:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:49.166 17:06:39 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:49.166 17:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:49.166 17:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:49.166 17:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:49.166 17:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:49.166 17:06:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:49.166 17:06:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.166 17:06:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.425 nvme0n1 00:17:49.425 17:06:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.425 17:06:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:49.425 17:06:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:49.425 17:06:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.425 17:06:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.425 17:06:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.425 17:06:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.425 17:06:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:49.425 17:06:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.425 17:06:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.425 17:06:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.425 17:06:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:49.425 17:06:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:17:49.425 17:06:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:49.425 17:06:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:49.425 17:06:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:49.425 17:06:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:49.425 17:06:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGE4ZTc1YzcyMTAwOTQ1MTEwYWM3NzYxNWY2MzRlOGU5NDYwMzAzYjVlOTkwZGUwZOK5Ig==: 00:17:49.425 17:06:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWU3MDQ1ZTc0NTc2ZTZmMzk0YmYyMWU4NWZlMDc2Njezi5ED: 00:17:49.425 17:06:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:49.425 17:06:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:49.425 17:06:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGE4ZTc1YzcyMTAwOTQ1MTEwYWM3NzYxNWY2MzRlOGU5NDYwMzAzYjVlOTkwZGUwZOK5Ig==: 00:17:49.425 17:06:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWU3MDQ1ZTc0NTc2ZTZmMzk0YmYyMWU4NWZlMDc2Njezi5ED: ]] 00:17:49.425 17:06:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWU3MDQ1ZTc0NTc2ZTZmMzk0YmYyMWU4NWZlMDc2Njezi5ED: 00:17:49.425 17:06:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:17:49.425 17:06:39 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:49.425 17:06:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:49.425 17:06:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:49.425 17:06:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:49.425 17:06:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:49.425 17:06:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:49.425 17:06:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.425 17:06:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.425 17:06:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.425 17:06:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:49.425 17:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:49.425 17:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:49.425 17:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:49.425 17:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:49.425 17:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:49.425 17:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:49.425 17:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:49.425 17:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:49.425 17:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:49.425 17:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:49.425 17:06:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:49.425 17:06:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.425 17:06:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.684 nvme0n1 00:17:49.684 17:06:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.684 17:06:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:49.684 17:06:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.684 17:06:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.684 17:06:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:49.684 17:06:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.684 17:06:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.684 17:06:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:49.684 17:06:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.684 17:06:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.684 17:06:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.684 17:06:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:17:49.684 17:06:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:17:49.684 17:06:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:49.684 17:06:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:49.684 17:06:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:49.684 17:06:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:49.684 17:06:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTE3MzRmYmQ3NTUzOTE0MGM4YjYwYTk2YjU2NmE2ZDZkZDkzMzdjZjVlYzE4OGI2N2RiODBlYzY2OTE4ODliNegR4ac=: 00:17:49.684 17:06:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:49.684 17:06:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:49.684 17:06:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:49.684 17:06:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTE3MzRmYmQ3NTUzOTE0MGM4YjYwYTk2YjU2NmE2ZDZkZDkzMzdjZjVlYzE4OGI2N2RiODBlYzY2OTE4ODliNegR4ac=: 00:17:49.684 17:06:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:49.684 17:06:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:17:49.684 17:06:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:49.684 17:06:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:49.684 17:06:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:49.684 17:06:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:49.684 17:06:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:49.684 17:06:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:49.684 17:06:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.684 17:06:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.684 17:06:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.684 17:06:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:49.684 17:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:49.684 17:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:49.684 17:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:49.684 17:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:49.684 17:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:49.684 17:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:49.684 17:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:49.684 17:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:49.684 17:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:49.684 17:06:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:49.684 17:06:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:49.684 17:06:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:17:49.684 17:06:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.942 nvme0n1 00:17:49.942 17:06:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.942 17:06:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:49.943 17:06:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.943 17:06:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:49.943 17:06:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.943 17:06:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.943 17:06:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.943 17:06:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:49.943 17:06:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.943 17:06:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.943 17:06:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.943 17:06:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:49.943 17:06:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:49.943 17:06:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:17:49.943 17:06:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:49.943 17:06:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:49.943 17:06:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:49.943 17:06:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:49.943 17:06:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGJmMTRmM2M3NzU1YTgwOWEzYWU4MzU2MjUxYmMzZDULvU7a: 00:17:49.943 17:06:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTBkNzk4ZjIwZDQ5ZGQ2NTI5N2M2OTNiMDhhNjY3ZGQxMjk2ZTBlOWI5NzJmODY3NjFkNTc5N2JiYTc3NDJjOZIIbO0=: 00:17:49.943 17:06:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:49.943 17:06:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:49.943 17:06:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGJmMTRmM2M3NzU1YTgwOWEzYWU4MzU2MjUxYmMzZDULvU7a: 00:17:49.943 17:06:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTBkNzk4ZjIwZDQ5ZGQ2NTI5N2M2OTNiMDhhNjY3ZGQxMjk2ZTBlOWI5NzJmODY3NjFkNTc5N2JiYTc3NDJjOZIIbO0=: ]] 00:17:49.943 17:06:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTBkNzk4ZjIwZDQ5ZGQ2NTI5N2M2OTNiMDhhNjY3ZGQxMjk2ZTBlOWI5NzJmODY3NjFkNTc5N2JiYTc3NDJjOZIIbO0=: 00:17:49.943 17:06:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:17:49.943 17:06:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:49.943 17:06:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:49.943 17:06:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:49.943 17:06:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:49.943 17:06:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:49.943 17:06:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:17:49.943 17:06:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.943 17:06:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.943 17:06:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.943 17:06:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:49.943 17:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:49.943 17:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:49.943 17:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:49.943 17:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:49.943 17:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:49.943 17:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:49.943 17:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:49.943 17:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:49.943 17:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:49.943 17:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:49.943 17:06:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:49.943 17:06:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.943 17:06:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.510 nvme0n1 00:17:50.510 17:06:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.511 17:06:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:50.511 17:06:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.511 17:06:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:50.511 17:06:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.511 17:06:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.511 17:06:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.511 17:06:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:50.511 17:06:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.511 17:06:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.511 17:06:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.511 17:06:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:50.511 17:06:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:17:50.511 17:06:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:50.511 17:06:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:50.511 17:06:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:50.511 17:06:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:50.511 17:06:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MTQ0YWZmYzc4ZDJhNDVlZWRkOTNmZWNhZWE4YzdjNjI1MzczNzAzMTE2ODg1ZmM0fPXbvA==: 00:17:50.511 17:06:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmNmMmI5NjdmYjE5OThmZWFhMjEyNDA1MGNiNzYxOTkzZDI1YWQyZGE2ZDExODFhMmfSfg==: 00:17:50.511 17:06:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:50.511 17:06:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:50.511 17:06:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTQ0YWZmYzc4ZDJhNDVlZWRkOTNmZWNhZWE4YzdjNjI1MzczNzAzMTE2ODg1ZmM0fPXbvA==: 00:17:50.511 17:06:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmNmMmI5NjdmYjE5OThmZWFhMjEyNDA1MGNiNzYxOTkzZDI1YWQyZGE2ZDExODFhMmfSfg==: ]] 00:17:50.511 17:06:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmNmMmI5NjdmYjE5OThmZWFhMjEyNDA1MGNiNzYxOTkzZDI1YWQyZGE2ZDExODFhMmfSfg==: 00:17:50.511 17:06:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:17:50.511 17:06:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:50.511 17:06:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:50.511 17:06:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:50.511 17:06:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:50.511 17:06:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:50.511 17:06:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:50.511 17:06:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.511 17:06:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.511 17:06:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.511 17:06:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:50.511 17:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:50.511 17:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:50.511 17:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:50.511 17:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:50.511 17:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:50.511 17:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:50.511 17:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:50.511 17:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:50.511 17:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:50.511 17:06:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:50.511 17:06:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:50.511 17:06:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.511 17:06:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.770 nvme0n1 00:17:50.770 17:06:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.770 17:06:40 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:50.770 17:06:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.770 17:06:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:50.770 17:06:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.770 17:06:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.770 17:06:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.770 17:06:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:50.770 17:06:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.770 17:06:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.770 17:06:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.770 17:06:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:50.770 17:06:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:17:50.770 17:06:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:50.770 17:06:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:50.770 17:06:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:50.770 17:06:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:50.770 17:06:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjUwMmExYmY4ZDExMzkzYTYwMmU2YjYyYzc4N2Q2YWas+WD7: 00:17:50.770 17:06:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDA5YzRjODkzMGJhMjk0NjZiNTBhODFhNTYxMjEzZDDwsH1o: 00:17:50.770 17:06:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:50.770 17:06:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:50.770 17:06:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjUwMmExYmY4ZDExMzkzYTYwMmU2YjYyYzc4N2Q2YWas+WD7: 00:17:50.770 17:06:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDA5YzRjODkzMGJhMjk0NjZiNTBhODFhNTYxMjEzZDDwsH1o: ]] 00:17:50.770 17:06:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDA5YzRjODkzMGJhMjk0NjZiNTBhODFhNTYxMjEzZDDwsH1o: 00:17:50.770 17:06:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:17:50.770 17:06:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:50.770 17:06:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:50.770 17:06:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:50.770 17:06:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:50.770 17:06:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:50.770 17:06:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:50.770 17:06:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.770 17:06:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.770 17:06:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.770 17:06:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:50.770 17:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:17:50.770 17:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:50.770 17:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:50.770 17:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:50.770 17:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:50.770 17:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:50.770 17:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:50.770 17:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:50.770 17:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:50.770 17:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:50.770 17:06:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:50.770 17:06:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.770 17:06:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.337 nvme0n1 00:17:51.337 17:06:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.337 17:06:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:51.337 17:06:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:51.337 17:06:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.337 17:06:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.337 17:06:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.337 17:06:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:51.337 17:06:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:51.337 17:06:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.337 17:06:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.337 17:06:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.337 17:06:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:51.337 17:06:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:17:51.337 17:06:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:51.337 17:06:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:51.337 17:06:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:51.337 17:06:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:51.337 17:06:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGE4ZTc1YzcyMTAwOTQ1MTEwYWM3NzYxNWY2MzRlOGU5NDYwMzAzYjVlOTkwZGUwZOK5Ig==: 00:17:51.337 17:06:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWU3MDQ1ZTc0NTc2ZTZmMzk0YmYyMWU4NWZlMDc2Njezi5ED: 00:17:51.337 17:06:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:51.337 17:06:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:51.337 17:06:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:OGE4ZTc1YzcyMTAwOTQ1MTEwYWM3NzYxNWY2MzRlOGU5NDYwMzAzYjVlOTkwZGUwZOK5Ig==: 00:17:51.337 17:06:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWU3MDQ1ZTc0NTc2ZTZmMzk0YmYyMWU4NWZlMDc2Njezi5ED: ]] 00:17:51.337 17:06:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWU3MDQ1ZTc0NTc2ZTZmMzk0YmYyMWU4NWZlMDc2Njezi5ED: 00:17:51.337 17:06:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:17:51.337 17:06:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:51.337 17:06:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:51.337 17:06:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:51.337 17:06:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:51.337 17:06:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:51.337 17:06:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:51.337 17:06:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.337 17:06:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.337 17:06:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.337 17:06:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:51.337 17:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:51.337 17:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:51.337 17:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:51.337 17:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:51.337 17:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:51.337 17:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:51.337 17:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:51.337 17:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:51.337 17:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:51.337 17:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:51.337 17:06:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:51.337 17:06:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.337 17:06:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.596 nvme0n1 00:17:51.596 17:06:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.596 17:06:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:51.596 17:06:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.596 17:06:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:51.596 17:06:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.596 17:06:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.596 17:06:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:17:51.596 17:06:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:51.596 17:06:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.596 17:06:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.854 17:06:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.854 17:06:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:51.854 17:06:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:17:51.854 17:06:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:51.854 17:06:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:51.854 17:06:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:51.854 17:06:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:51.854 17:06:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTE3MzRmYmQ3NTUzOTE0MGM4YjYwYTk2YjU2NmE2ZDZkZDkzMzdjZjVlYzE4OGI2N2RiODBlYzY2OTE4ODliNegR4ac=: 00:17:51.854 17:06:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:51.854 17:06:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:51.854 17:06:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:51.854 17:06:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTE3MzRmYmQ3NTUzOTE0MGM4YjYwYTk2YjU2NmE2ZDZkZDkzMzdjZjVlYzE4OGI2N2RiODBlYzY2OTE4ODliNegR4ac=: 00:17:51.854 17:06:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:51.854 17:06:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:17:51.854 17:06:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:51.854 17:06:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:51.854 17:06:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:51.854 17:06:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:51.854 17:06:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:51.854 17:06:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:51.854 17:06:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.854 17:06:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.854 17:06:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.854 17:06:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:51.854 17:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:51.854 17:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:51.854 17:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:51.854 17:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:51.854 17:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:51.854 17:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:51.854 17:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:51.854 17:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
00:17:51.854 17:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:51.854 17:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:51.854 17:06:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:51.854 17:06:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.854 17:06:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.113 nvme0n1 00:17:52.113 17:06:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.113 17:06:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:52.113 17:06:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:52.113 17:06:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.113 17:06:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.113 17:06:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.113 17:06:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.113 17:06:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:52.113 17:06:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.113 17:06:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.113 17:06:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.113 17:06:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:52.113 17:06:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:52.113 17:06:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:17:52.113 17:06:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:52.113 17:06:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:52.113 17:06:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:52.113 17:06:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:52.113 17:06:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGJmMTRmM2M3NzU1YTgwOWEzYWU4MzU2MjUxYmMzZDULvU7a: 00:17:52.113 17:06:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTBkNzk4ZjIwZDQ5ZGQ2NTI5N2M2OTNiMDhhNjY3ZGQxMjk2ZTBlOWI5NzJmODY3NjFkNTc5N2JiYTc3NDJjOZIIbO0=: 00:17:52.113 17:06:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:52.113 17:06:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:52.113 17:06:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGJmMTRmM2M3NzU1YTgwOWEzYWU4MzU2MjUxYmMzZDULvU7a: 00:17:52.113 17:06:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTBkNzk4ZjIwZDQ5ZGQ2NTI5N2M2OTNiMDhhNjY3ZGQxMjk2ZTBlOWI5NzJmODY3NjFkNTc5N2JiYTc3NDJjOZIIbO0=: ]] 00:17:52.113 17:06:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTBkNzk4ZjIwZDQ5ZGQ2NTI5N2M2OTNiMDhhNjY3ZGQxMjk2ZTBlOWI5NzJmODY3NjFkNTc5N2JiYTc3NDJjOZIIbO0=: 00:17:52.113 17:06:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:17:52.113 17:06:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
00:17:52.113 17:06:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:52.113 17:06:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:52.113 17:06:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:52.113 17:06:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:52.113 17:06:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:52.113 17:06:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.113 17:06:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.113 17:06:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.113 17:06:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:52.113 17:06:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:52.113 17:06:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:52.113 17:06:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:52.113 17:06:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:52.113 17:06:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:52.113 17:06:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:52.113 17:06:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:52.113 17:06:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:52.113 17:06:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:52.113 17:06:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:52.113 17:06:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:52.113 17:06:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.113 17:06:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.729 nvme0n1 00:17:52.729 17:06:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.729 17:06:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:52.729 17:06:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.729 17:06:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:52.729 17:06:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.729 17:06:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.729 17:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.729 17:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:52.729 17:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.729 17:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.729 17:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.729 17:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:52.729 17:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:17:52.729 17:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:52.729 17:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:52.729 17:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:52.729 17:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:52.729 17:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTQ0YWZmYzc4ZDJhNDVlZWRkOTNmZWNhZWE4YzdjNjI1MzczNzAzMTE2ODg1ZmM0fPXbvA==: 00:17:52.729 17:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmNmMmI5NjdmYjE5OThmZWFhMjEyNDA1MGNiNzYxOTkzZDI1YWQyZGE2ZDExODFhMmfSfg==: 00:17:52.729 17:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:52.729 17:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:52.729 17:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTQ0YWZmYzc4ZDJhNDVlZWRkOTNmZWNhZWE4YzdjNjI1MzczNzAzMTE2ODg1ZmM0fPXbvA==: 00:17:52.729 17:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmNmMmI5NjdmYjE5OThmZWFhMjEyNDA1MGNiNzYxOTkzZDI1YWQyZGE2ZDExODFhMmfSfg==: ]] 00:17:52.729 17:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmNmMmI5NjdmYjE5OThmZWFhMjEyNDA1MGNiNzYxOTkzZDI1YWQyZGE2ZDExODFhMmfSfg==: 00:17:52.729 17:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:17:52.729 17:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:52.729 17:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:52.729 17:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:52.729 17:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:52.729 17:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:52.729 17:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:52.729 17:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.729 17:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.988 17:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.988 17:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:52.988 17:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:52.988 17:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:52.988 17:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:52.988 17:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:52.988 17:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:52.988 17:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:52.988 17:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:52.988 17:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:52.988 17:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:52.988 17:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:52.988 17:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:52.988 17:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.988 17:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.554 nvme0n1 00:17:53.554 17:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.554 17:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:53.554 17:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.554 17:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:53.554 17:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.554 17:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.554 17:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.554 17:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:53.554 17:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.554 17:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.554 17:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.554 17:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:53.554 17:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:17:53.554 17:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:53.554 17:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:53.554 17:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:53.554 17:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:53.554 17:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjUwMmExYmY4ZDExMzkzYTYwMmU2YjYyYzc4N2Q2YWas+WD7: 00:17:53.554 17:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDA5YzRjODkzMGJhMjk0NjZiNTBhODFhNTYxMjEzZDDwsH1o: 00:17:53.554 17:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:53.554 17:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:53.554 17:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjUwMmExYmY4ZDExMzkzYTYwMmU2YjYyYzc4N2Q2YWas+WD7: 00:17:53.554 17:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDA5YzRjODkzMGJhMjk0NjZiNTBhODFhNTYxMjEzZDDwsH1o: ]] 00:17:53.554 17:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDA5YzRjODkzMGJhMjk0NjZiNTBhODFhNTYxMjEzZDDwsH1o: 00:17:53.554 17:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:17:53.554 17:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:53.554 17:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:53.554 17:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:53.554 17:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:53.554 17:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:53.554 17:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:17:53.554 17:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.554 17:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.554 17:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.554 17:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:53.554 17:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:53.554 17:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:53.554 17:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:53.554 17:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:53.554 17:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:53.554 17:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:53.554 17:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:53.554 17:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:53.554 17:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:53.554 17:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:53.554 17:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:53.554 17:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.554 17:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:54.119 nvme0n1 00:17:54.119 17:06:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.119 17:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:54.119 17:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:54.119 17:06:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.119 17:06:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:54.119 17:06:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.119 17:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.119 17:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:54.119 17:06:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.119 17:06:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:54.376 17:06:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.376 17:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:54.376 17:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:17:54.376 17:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:54.376 17:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:54.376 17:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:54.376 17:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:54.376 17:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:OGE4ZTc1YzcyMTAwOTQ1MTEwYWM3NzYxNWY2MzRlOGU5NDYwMzAzYjVlOTkwZGUwZOK5Ig==: 00:17:54.376 17:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWU3MDQ1ZTc0NTc2ZTZmMzk0YmYyMWU4NWZlMDc2Njezi5ED: 00:17:54.376 17:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:54.376 17:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:54.376 17:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGE4ZTc1YzcyMTAwOTQ1MTEwYWM3NzYxNWY2MzRlOGU5NDYwMzAzYjVlOTkwZGUwZOK5Ig==: 00:17:54.376 17:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWU3MDQ1ZTc0NTc2ZTZmMzk0YmYyMWU4NWZlMDc2Njezi5ED: ]] 00:17:54.376 17:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWU3MDQ1ZTc0NTc2ZTZmMzk0YmYyMWU4NWZlMDc2Njezi5ED: 00:17:54.376 17:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:17:54.376 17:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:54.376 17:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:54.376 17:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:54.376 17:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:54.376 17:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:54.376 17:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:54.376 17:06:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.376 17:06:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:54.376 17:06:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.377 17:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:54.377 17:06:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:54.377 17:06:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:54.377 17:06:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:54.377 17:06:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:54.377 17:06:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:54.377 17:06:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:54.377 17:06:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:54.377 17:06:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:54.377 17:06:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:54.377 17:06:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:54.377 17:06:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:54.377 17:06:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.377 17:06:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:54.943 nvme0n1 00:17:54.943 17:06:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.943 17:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:17:54.943 17:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:54.943 17:06:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.943 17:06:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:54.943 17:06:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.943 17:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.943 17:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:54.943 17:06:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.943 17:06:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:54.943 17:06:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.943 17:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:54.943 17:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:17:54.943 17:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:54.943 17:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:54.943 17:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:54.943 17:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:54.943 17:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTE3MzRmYmQ3NTUzOTE0MGM4YjYwYTk2YjU2NmE2ZDZkZDkzMzdjZjVlYzE4OGI2N2RiODBlYzY2OTE4ODliNegR4ac=: 00:17:54.943 17:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:54.943 17:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:54.943 17:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:54.943 17:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTE3MzRmYmQ3NTUzOTE0MGM4YjYwYTk2YjU2NmE2ZDZkZDkzMzdjZjVlYzE4OGI2N2RiODBlYzY2OTE4ODliNegR4ac=: 00:17:54.943 17:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:54.943 17:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:17:54.943 17:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:54.943 17:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:54.943 17:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:54.943 17:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:54.943 17:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:54.944 17:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:54.944 17:06:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.944 17:06:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:54.944 17:06:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.944 17:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:54.944 17:06:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:54.944 17:06:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:54.944 17:06:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:54.944 17:06:45 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:54.944 17:06:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:54.944 17:06:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:54.944 17:06:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:54.944 17:06:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:54.944 17:06:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:54.944 17:06:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:54.944 17:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:54.944 17:06:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.944 17:06:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.510 nvme0n1 00:17:55.510 17:06:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.510 17:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:55.510 17:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:55.510 17:06:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.510 17:06:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.510 17:06:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.769 17:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:55.769 17:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:55.769 17:06:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.769 17:06:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.769 17:06:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.769 17:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:17:55.769 17:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:55.769 17:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:55.769 17:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:17:55.769 17:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:55.769 17:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:55.769 17:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:55.769 17:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:55.769 17:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGJmMTRmM2M3NzU1YTgwOWEzYWU4MzU2MjUxYmMzZDULvU7a: 00:17:55.769 17:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTBkNzk4ZjIwZDQ5ZGQ2NTI5N2M2OTNiMDhhNjY3ZGQxMjk2ZTBlOWI5NzJmODY3NjFkNTc5N2JiYTc3NDJjOZIIbO0=: 00:17:55.769 17:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:55.769 17:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:55.769 17:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MGJmMTRmM2M3NzU1YTgwOWEzYWU4MzU2MjUxYmMzZDULvU7a: 00:17:55.769 17:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTBkNzk4ZjIwZDQ5ZGQ2NTI5N2M2OTNiMDhhNjY3ZGQxMjk2ZTBlOWI5NzJmODY3NjFkNTc5N2JiYTc3NDJjOZIIbO0=: ]] 00:17:55.769 17:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTBkNzk4ZjIwZDQ5ZGQ2NTI5N2M2OTNiMDhhNjY3ZGQxMjk2ZTBlOWI5NzJmODY3NjFkNTc5N2JiYTc3NDJjOZIIbO0=: 00:17:55.769 17:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:17:55.769 17:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:55.769 17:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:55.769 17:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:55.769 17:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:55.769 17:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:55.769 17:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:55.769 17:06:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.769 17:06:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.769 17:06:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.769 17:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:55.769 17:06:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:55.769 17:06:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:55.769 17:06:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:55.769 17:06:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:55.769 17:06:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:55.769 17:06:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:55.769 17:06:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:55.769 17:06:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:55.769 17:06:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:55.769 17:06:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:55.769 17:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:55.769 17:06:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.769 17:06:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.769 nvme0n1 00:17:55.769 17:06:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.769 17:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:55.769 17:06:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.769 17:06:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.769 17:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:55.769 17:06:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.769 17:06:46 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:55.770 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:55.770 17:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.770 17:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.770 17:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.770 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:55.770 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:17:55.770 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:55.770 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:55.770 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:55.770 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:55.770 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTQ0YWZmYzc4ZDJhNDVlZWRkOTNmZWNhZWE4YzdjNjI1MzczNzAzMTE2ODg1ZmM0fPXbvA==: 00:17:55.770 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmNmMmI5NjdmYjE5OThmZWFhMjEyNDA1MGNiNzYxOTkzZDI1YWQyZGE2ZDExODFhMmfSfg==: 00:17:55.770 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:55.770 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:55.770 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTQ0YWZmYzc4ZDJhNDVlZWRkOTNmZWNhZWE4YzdjNjI1MzczNzAzMTE2ODg1ZmM0fPXbvA==: 00:17:55.770 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmNmMmI5NjdmYjE5OThmZWFhMjEyNDA1MGNiNzYxOTkzZDI1YWQyZGE2ZDExODFhMmfSfg==: ]] 00:17:55.770 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmNmMmI5NjdmYjE5OThmZWFhMjEyNDA1MGNiNzYxOTkzZDI1YWQyZGE2ZDExODFhMmfSfg==: 00:17:55.770 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:17:55.770 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:55.770 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:55.770 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:55.770 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:55.770 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:55.770 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:55.770 17:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.770 17:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.770 17:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.770 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:55.770 17:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:55.770 17:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:55.770 17:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:55.770 17:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:55.770 17:06:46 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:55.770 17:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:55.770 17:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:55.770 17:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:55.770 17:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:55.770 17:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:55.770 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:55.770 17:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.770 17:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:56.029 nvme0n1 00:17:56.029 17:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.029 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:56.029 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:56.029 17:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.029 17:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:56.029 17:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.029 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.029 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:56.029 17:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.029 17:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:56.029 17:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.029 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:56.029 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:17:56.029 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:56.029 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:56.029 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:56.029 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:56.029 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjUwMmExYmY4ZDExMzkzYTYwMmU2YjYyYzc4N2Q2YWas+WD7: 00:17:56.029 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDA5YzRjODkzMGJhMjk0NjZiNTBhODFhNTYxMjEzZDDwsH1o: 00:17:56.029 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:56.029 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:56.029 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjUwMmExYmY4ZDExMzkzYTYwMmU2YjYyYzc4N2Q2YWas+WD7: 00:17:56.029 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDA5YzRjODkzMGJhMjk0NjZiNTBhODFhNTYxMjEzZDDwsH1o: ]] 00:17:56.029 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDA5YzRjODkzMGJhMjk0NjZiNTBhODFhNTYxMjEzZDDwsH1o: 00:17:56.029 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:17:56.029 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:56.029 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:56.029 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:56.029 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:56.029 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:56.029 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:56.029 17:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.029 17:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:56.029 17:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.029 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:56.029 17:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:56.029 17:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:56.029 17:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:56.029 17:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:56.029 17:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:56.029 17:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:56.029 17:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:56.029 17:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:56.029 17:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:56.029 17:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:56.029 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:56.029 17:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.029 17:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:56.029 nvme0n1 00:17:56.029 17:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.306 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:56.306 17:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.306 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:56.306 17:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:56.306 17:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.306 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.306 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:56.306 17:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.306 17:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:56.306 17:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.306 17:06:46 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:56.306 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:17:56.306 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:56.306 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:56.306 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:56.306 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:56.306 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGE4ZTc1YzcyMTAwOTQ1MTEwYWM3NzYxNWY2MzRlOGU5NDYwMzAzYjVlOTkwZGUwZOK5Ig==: 00:17:56.306 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWU3MDQ1ZTc0NTc2ZTZmMzk0YmYyMWU4NWZlMDc2Njezi5ED: 00:17:56.306 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:56.306 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:56.306 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGE4ZTc1YzcyMTAwOTQ1MTEwYWM3NzYxNWY2MzRlOGU5NDYwMzAzYjVlOTkwZGUwZOK5Ig==: 00:17:56.306 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWU3MDQ1ZTc0NTc2ZTZmMzk0YmYyMWU4NWZlMDc2Njezi5ED: ]] 00:17:56.306 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWU3MDQ1ZTc0NTc2ZTZmMzk0YmYyMWU4NWZlMDc2Njezi5ED: 00:17:56.306 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:17:56.306 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:56.306 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:56.306 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:56.306 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:56.306 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:56.306 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:56.306 17:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.306 17:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:56.306 17:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.306 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:56.306 17:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:56.306 17:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:56.306 17:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:56.306 17:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:56.306 17:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:56.306 17:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:56.306 17:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:56.306 17:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:56.306 17:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:56.306 17:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:56.306 17:06:46 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:56.306 17:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.306 17:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:56.306 nvme0n1 00:17:56.306 17:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.306 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:56.306 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:56.306 17:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.306 17:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:56.306 17:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.306 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.306 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:56.306 17:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.306 17:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:56.306 17:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.306 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:56.306 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:17:56.306 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:56.306 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:56.306 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:56.306 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:56.306 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTE3MzRmYmQ3NTUzOTE0MGM4YjYwYTk2YjU2NmE2ZDZkZDkzMzdjZjVlYzE4OGI2N2RiODBlYzY2OTE4ODliNegR4ac=: 00:17:56.306 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:56.306 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:56.306 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:56.306 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTE3MzRmYmQ3NTUzOTE0MGM4YjYwYTk2YjU2NmE2ZDZkZDkzMzdjZjVlYzE4OGI2N2RiODBlYzY2OTE4ODliNegR4ac=: 00:17:56.306 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:56.306 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:17:56.306 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:56.306 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:56.306 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:56.306 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:56.306 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:56.306 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:56.306 17:06:46 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.306 17:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:56.306 17:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.306 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:56.306 17:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:56.306 17:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:56.306 17:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:56.306 17:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:56.306 17:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:56.306 17:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:56.306 17:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:56.306 17:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:56.307 17:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:56.307 17:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:56.307 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:56.307 17:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.307 17:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:56.585 nvme0n1 00:17:56.585 17:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.585 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:56.585 17:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.585 17:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:56.585 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:56.585 17:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.585 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.585 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:56.585 17:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.585 17:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:56.585 17:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.585 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:56.585 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:56.585 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:17:56.585 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:56.585 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:56.585 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:56.585 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:56.585 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MGJmMTRmM2M3NzU1YTgwOWEzYWU4MzU2MjUxYmMzZDULvU7a: 00:17:56.585 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTBkNzk4ZjIwZDQ5ZGQ2NTI5N2M2OTNiMDhhNjY3ZGQxMjk2ZTBlOWI5NzJmODY3NjFkNTc5N2JiYTc3NDJjOZIIbO0=: 00:17:56.585 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:56.585 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:56.585 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGJmMTRmM2M3NzU1YTgwOWEzYWU4MzU2MjUxYmMzZDULvU7a: 00:17:56.585 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTBkNzk4ZjIwZDQ5ZGQ2NTI5N2M2OTNiMDhhNjY3ZGQxMjk2ZTBlOWI5NzJmODY3NjFkNTc5N2JiYTc3NDJjOZIIbO0=: ]] 00:17:56.585 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTBkNzk4ZjIwZDQ5ZGQ2NTI5N2M2OTNiMDhhNjY3ZGQxMjk2ZTBlOWI5NzJmODY3NjFkNTc5N2JiYTc3NDJjOZIIbO0=: 00:17:56.585 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:17:56.585 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:56.585 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:56.585 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:56.585 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:56.585 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:56.585 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:56.585 17:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.585 17:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:56.585 17:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.585 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:56.585 17:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:56.585 17:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:56.585 17:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:56.585 17:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:56.585 17:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:56.585 17:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:56.585 17:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:56.585 17:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:56.585 17:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:56.585 17:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:56.585 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:56.585 17:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.585 17:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:56.585 nvme0n1 00:17:56.585 17:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.844 
17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:56.844 17:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.844 17:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:56.844 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:56.844 17:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.844 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.844 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:56.844 17:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.844 17:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:56.844 17:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.844 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:56.844 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:17:56.844 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:56.844 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:56.844 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:56.844 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:56.844 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTQ0YWZmYzc4ZDJhNDVlZWRkOTNmZWNhZWE4YzdjNjI1MzczNzAzMTE2ODg1ZmM0fPXbvA==: 00:17:56.844 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmNmMmI5NjdmYjE5OThmZWFhMjEyNDA1MGNiNzYxOTkzZDI1YWQyZGE2ZDExODFhMmfSfg==: 00:17:56.844 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:56.844 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:56.844 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTQ0YWZmYzc4ZDJhNDVlZWRkOTNmZWNhZWE4YzdjNjI1MzczNzAzMTE2ODg1ZmM0fPXbvA==: 00:17:56.844 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmNmMmI5NjdmYjE5OThmZWFhMjEyNDA1MGNiNzYxOTkzZDI1YWQyZGE2ZDExODFhMmfSfg==: ]] 00:17:56.844 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmNmMmI5NjdmYjE5OThmZWFhMjEyNDA1MGNiNzYxOTkzZDI1YWQyZGE2ZDExODFhMmfSfg==: 00:17:56.844 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:17:56.844 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:56.844 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:56.844 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:56.844 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:56.844 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:56.844 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:56.844 17:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.844 17:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:56.844 17:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.844 17:06:46 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:56.844 17:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:56.844 17:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:56.844 17:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:56.844 17:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:56.844 17:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:56.844 17:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:56.844 17:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:56.844 17:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:56.844 17:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:56.844 17:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:56.844 17:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:56.844 17:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.844 17:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:56.844 nvme0n1 00:17:56.844 17:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.844 17:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:56.844 17:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:56.844 17:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.844 17:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:56.844 17:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.844 17:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.844 17:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:56.844 17:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.844 17:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:56.844 17:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.844 17:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:56.844 17:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:17:56.844 17:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:56.844 17:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:56.844 17:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:56.845 17:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:56.845 17:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjUwMmExYmY4ZDExMzkzYTYwMmU2YjYyYzc4N2Q2YWas+WD7: 00:17:57.103 17:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDA5YzRjODkzMGJhMjk0NjZiNTBhODFhNTYxMjEzZDDwsH1o: 00:17:57.103 17:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:57.104 17:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
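The sha512/ffdhe3072 iterations traced here reduce to a short host-side sequence per key id; a minimal sketch of one pass (keyid 2), assuming rpc_cmd forwards to the SPDK target's scripts/rpc.py and that the named keys key2/ckey2 were registered with the bdev layer earlier in this run:

  # sketch only -- mirrors the connect_authenticate steps visible in the trace above/below
  rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
  ip=$(get_main_ns_ip)                      # resolves to 10.0.0.1 for the tcp transport in this log
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a "$ip" -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]   # controller came up, auth passed
  rpc_cmd bdev_nvme_detach_controller nvme0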
00:17:57.104 17:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjUwMmExYmY4ZDExMzkzYTYwMmU2YjYyYzc4N2Q2YWas+WD7: 00:17:57.104 17:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDA5YzRjODkzMGJhMjk0NjZiNTBhODFhNTYxMjEzZDDwsH1o: ]] 00:17:57.104 17:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDA5YzRjODkzMGJhMjk0NjZiNTBhODFhNTYxMjEzZDDwsH1o: 00:17:57.104 17:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:17:57.104 17:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:57.104 17:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:57.104 17:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:57.104 17:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:57.104 17:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:57.104 17:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:57.104 17:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.104 17:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:57.104 17:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.104 17:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:57.104 17:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:57.104 17:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:57.104 17:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:57.104 17:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:57.104 17:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:57.104 17:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:57.104 17:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:57.104 17:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:57.104 17:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:57.104 17:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:57.104 17:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:57.104 17:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.104 17:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:57.104 nvme0n1 00:17:57.104 17:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.104 17:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:57.104 17:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:57.104 17:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.104 17:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:57.104 17:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.104 17:06:47 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:57.104 17:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:57.104 17:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.104 17:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:57.104 17:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.104 17:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:57.104 17:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:17:57.104 17:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:57.104 17:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:57.104 17:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:57.104 17:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:57.104 17:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGE4ZTc1YzcyMTAwOTQ1MTEwYWM3NzYxNWY2MzRlOGU5NDYwMzAzYjVlOTkwZGUwZOK5Ig==: 00:17:57.104 17:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWU3MDQ1ZTc0NTc2ZTZmMzk0YmYyMWU4NWZlMDc2Njezi5ED: 00:17:57.104 17:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:57.104 17:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:57.104 17:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGE4ZTc1YzcyMTAwOTQ1MTEwYWM3NzYxNWY2MzRlOGU5NDYwMzAzYjVlOTkwZGUwZOK5Ig==: 00:17:57.104 17:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWU3MDQ1ZTc0NTc2ZTZmMzk0YmYyMWU4NWZlMDc2Njezi5ED: ]] 00:17:57.104 17:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWU3MDQ1ZTc0NTc2ZTZmMzk0YmYyMWU4NWZlMDc2Njezi5ED: 00:17:57.104 17:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:17:57.104 17:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:57.104 17:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:57.104 17:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:57.104 17:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:57.104 17:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:57.104 17:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:57.104 17:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.104 17:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:57.104 17:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.104 17:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:57.104 17:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:57.104 17:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:57.104 17:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:57.104 17:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:57.104 17:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
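The nvmf/common.sh lines around this point trace get_main_ns_ip, which amounts to mapping the transport to an environment-variable name and then expanding that name indirectly. A sketch of that selection under assumptions: the trace only shows the value "tcp", so the TEST_TRANSPORT variable name and the exported NVMF_* values are assumed, and this is not the actual nvmf/common.sh source:

  # sketch of the traced IP selection (nvmf/common.sh@741-755 in this log)
  get_main_ns_ip() {
      local ip
      local -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
      [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}  # "NVMF_INITIATOR_IP" for tcp, as in the trace
      [[ -n ${!ip} ]] && echo "${!ip}"      # indirect expansion -> 10.0.0.1 in this run
  }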
00:17:57.104 17:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:57.104 17:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:57.104 17:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:57.104 17:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:57.104 17:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:57.104 17:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:57.104 17:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.104 17:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:57.363 nvme0n1 00:17:57.363 17:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.363 17:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:57.363 17:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:57.363 17:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.363 17:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:57.363 17:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.363 17:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:57.363 17:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:57.363 17:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.363 17:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:57.363 17:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.363 17:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:57.363 17:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:17:57.363 17:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:57.363 17:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:57.363 17:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:57.363 17:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:57.363 17:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTE3MzRmYmQ3NTUzOTE0MGM4YjYwYTk2YjU2NmE2ZDZkZDkzMzdjZjVlYzE4OGI2N2RiODBlYzY2OTE4ODliNegR4ac=: 00:17:57.363 17:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:57.363 17:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:57.363 17:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:57.363 17:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTE3MzRmYmQ3NTUzOTE0MGM4YjYwYTk2YjU2NmE2ZDZkZDkzMzdjZjVlYzE4OGI2N2RiODBlYzY2OTE4ODliNegR4ac=: 00:17:57.363 17:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:57.363 17:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:17:57.363 17:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:57.363 17:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:57.363 
17:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:57.363 17:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:57.363 17:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:57.363 17:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:57.363 17:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.363 17:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:57.363 17:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.363 17:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:57.363 17:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:57.363 17:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:57.363 17:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:57.363 17:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:57.363 17:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:57.363 17:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:57.363 17:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:57.363 17:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:57.363 17:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:57.363 17:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:57.363 17:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:57.363 17:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.363 17:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:57.621 nvme0n1 00:17:57.621 17:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.621 17:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:57.621 17:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:57.621 17:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.621 17:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:57.621 17:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.621 17:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:57.621 17:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:57.621 17:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.621 17:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:57.621 17:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.621 17:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:57.621 17:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:57.621 17:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:17:57.621 17:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:57.621 17:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:57.621 17:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:57.621 17:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:57.621 17:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGJmMTRmM2M3NzU1YTgwOWEzYWU4MzU2MjUxYmMzZDULvU7a: 00:17:57.621 17:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTBkNzk4ZjIwZDQ5ZGQ2NTI5N2M2OTNiMDhhNjY3ZGQxMjk2ZTBlOWI5NzJmODY3NjFkNTc5N2JiYTc3NDJjOZIIbO0=: 00:17:57.621 17:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:57.621 17:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:57.621 17:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGJmMTRmM2M3NzU1YTgwOWEzYWU4MzU2MjUxYmMzZDULvU7a: 00:17:57.621 17:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTBkNzk4ZjIwZDQ5ZGQ2NTI5N2M2OTNiMDhhNjY3ZGQxMjk2ZTBlOWI5NzJmODY3NjFkNTc5N2JiYTc3NDJjOZIIbO0=: ]] 00:17:57.621 17:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTBkNzk4ZjIwZDQ5ZGQ2NTI5N2M2OTNiMDhhNjY3ZGQxMjk2ZTBlOWI5NzJmODY3NjFkNTc5N2JiYTc3NDJjOZIIbO0=: 00:17:57.621 17:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:17:57.621 17:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:57.621 17:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:57.621 17:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:57.621 17:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:57.621 17:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:57.621 17:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:57.621 17:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.621 17:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:57.621 17:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.621 17:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:57.621 17:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:57.621 17:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:57.622 17:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:57.622 17:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:57.622 17:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:57.622 17:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:57.622 17:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:57.622 17:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:57.622 17:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:57.622 17:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:57.622 17:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:57.622 17:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.622 17:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:57.880 nvme0n1 00:17:57.880 17:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.880 17:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:57.880 17:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:57.880 17:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.880 17:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:57.880 17:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.880 17:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:57.880 17:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:57.880 17:06:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.880 17:06:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:57.880 17:06:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.880 17:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:57.880 17:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:17:57.880 17:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:57.880 17:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:57.880 17:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:57.880 17:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:57.880 17:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTQ0YWZmYzc4ZDJhNDVlZWRkOTNmZWNhZWE4YzdjNjI1MzczNzAzMTE2ODg1ZmM0fPXbvA==: 00:17:57.880 17:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmNmMmI5NjdmYjE5OThmZWFhMjEyNDA1MGNiNzYxOTkzZDI1YWQyZGE2ZDExODFhMmfSfg==: 00:17:57.880 17:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:57.880 17:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:57.880 17:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTQ0YWZmYzc4ZDJhNDVlZWRkOTNmZWNhZWE4YzdjNjI1MzczNzAzMTE2ODg1ZmM0fPXbvA==: 00:17:57.880 17:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmNmMmI5NjdmYjE5OThmZWFhMjEyNDA1MGNiNzYxOTkzZDI1YWQyZGE2ZDExODFhMmfSfg==: ]] 00:17:57.880 17:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmNmMmI5NjdmYjE5OThmZWFhMjEyNDA1MGNiNzYxOTkzZDI1YWQyZGE2ZDExODFhMmfSfg==: 00:17:57.880 17:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:17:57.880 17:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:57.880 17:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:57.880 17:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:57.880 17:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:57.880 17:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:57.880 17:06:48 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:57.880 17:06:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.880 17:06:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:57.880 17:06:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.880 17:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:57.880 17:06:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:57.880 17:06:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:57.880 17:06:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:57.880 17:06:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:57.880 17:06:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:57.880 17:06:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:57.880 17:06:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:57.880 17:06:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:57.880 17:06:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:57.880 17:06:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:57.880 17:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:57.880 17:06:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.880 17:06:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:58.139 nvme0n1 00:17:58.139 17:06:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.139 17:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:58.139 17:06:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.139 17:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:58.139 17:06:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:58.139 17:06:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.139 17:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:58.139 17:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:58.139 17:06:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.139 17:06:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:58.139 17:06:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.139 17:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:58.139 17:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:17:58.139 17:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:58.139 17:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:58.139 17:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:58.139 17:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
00:17:58.139 17:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjUwMmExYmY4ZDExMzkzYTYwMmU2YjYyYzc4N2Q2YWas+WD7: 00:17:58.139 17:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDA5YzRjODkzMGJhMjk0NjZiNTBhODFhNTYxMjEzZDDwsH1o: 00:17:58.139 17:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:58.139 17:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:58.139 17:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjUwMmExYmY4ZDExMzkzYTYwMmU2YjYyYzc4N2Q2YWas+WD7: 00:17:58.139 17:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDA5YzRjODkzMGJhMjk0NjZiNTBhODFhNTYxMjEzZDDwsH1o: ]] 00:17:58.139 17:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDA5YzRjODkzMGJhMjk0NjZiNTBhODFhNTYxMjEzZDDwsH1o: 00:17:58.139 17:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:17:58.139 17:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:58.139 17:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:58.139 17:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:58.139 17:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:58.139 17:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:58.139 17:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:58.139 17:06:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.139 17:06:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:58.139 17:06:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.139 17:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:58.139 17:06:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:58.139 17:06:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:58.140 17:06:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:58.140 17:06:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:58.140 17:06:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:58.140 17:06:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:58.140 17:06:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:58.140 17:06:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:58.140 17:06:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:58.140 17:06:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:58.140 17:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:58.140 17:06:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.140 17:06:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:58.398 nvme0n1 00:17:58.398 17:06:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.398 17:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:17:58.398 17:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:58.398 17:06:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.398 17:06:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:58.398 17:06:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.398 17:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:58.398 17:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:58.398 17:06:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.398 17:06:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:58.398 17:06:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.398 17:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:58.398 17:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:17:58.398 17:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:58.398 17:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:58.398 17:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:58.398 17:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:58.398 17:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGE4ZTc1YzcyMTAwOTQ1MTEwYWM3NzYxNWY2MzRlOGU5NDYwMzAzYjVlOTkwZGUwZOK5Ig==: 00:17:58.398 17:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWU3MDQ1ZTc0NTc2ZTZmMzk0YmYyMWU4NWZlMDc2Njezi5ED: 00:17:58.398 17:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:58.398 17:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:58.398 17:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGE4ZTc1YzcyMTAwOTQ1MTEwYWM3NzYxNWY2MzRlOGU5NDYwMzAzYjVlOTkwZGUwZOK5Ig==: 00:17:58.398 17:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWU3MDQ1ZTc0NTc2ZTZmMzk0YmYyMWU4NWZlMDc2Njezi5ED: ]] 00:17:58.398 17:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWU3MDQ1ZTc0NTc2ZTZmMzk0YmYyMWU4NWZlMDc2Njezi5ED: 00:17:58.398 17:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:17:58.398 17:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:58.398 17:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:58.398 17:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:58.398 17:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:58.398 17:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:58.398 17:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:58.398 17:06:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.398 17:06:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:58.398 17:06:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.398 17:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:58.398 17:06:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:17:58.398 17:06:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:58.398 17:06:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:58.398 17:06:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:58.398 17:06:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:58.398 17:06:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:58.398 17:06:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:58.398 17:06:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:58.398 17:06:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:58.398 17:06:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:58.398 17:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:58.398 17:06:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.398 17:06:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:58.656 nvme0n1 00:17:58.656 17:06:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.656 17:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:58.656 17:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:58.656 17:06:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.656 17:06:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:58.656 17:06:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.656 17:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:58.656 17:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:58.656 17:06:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.656 17:06:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:58.656 17:06:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.656 17:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:58.656 17:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:17:58.656 17:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:58.656 17:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:58.656 17:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:58.656 17:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:58.656 17:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTE3MzRmYmQ3NTUzOTE0MGM4YjYwYTk2YjU2NmE2ZDZkZDkzMzdjZjVlYzE4OGI2N2RiODBlYzY2OTE4ODliNegR4ac=: 00:17:58.656 17:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:58.656 17:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:58.656 17:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:58.656 17:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MTE3MzRmYmQ3NTUzOTE0MGM4YjYwYTk2YjU2NmE2ZDZkZDkzMzdjZjVlYzE4OGI2N2RiODBlYzY2OTE4ODliNegR4ac=: 00:17:58.656 17:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:58.656 17:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:17:58.656 17:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:58.656 17:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:58.656 17:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:58.656 17:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:58.656 17:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:58.656 17:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:58.656 17:06:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.656 17:06:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:58.656 17:06:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.657 17:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:58.657 17:06:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:58.657 17:06:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:58.657 17:06:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:58.657 17:06:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:58.657 17:06:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:58.657 17:06:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:58.657 17:06:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:58.657 17:06:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:58.657 17:06:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:58.657 17:06:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:58.657 17:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:58.657 17:06:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.657 17:06:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:58.915 nvme0n1 00:17:58.915 17:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.915 17:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:58.915 17:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:58.915 17:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.915 17:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:58.915 17:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.915 17:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:58.915 17:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:58.915 17:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:17:58.915 17:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:58.915 17:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.915 17:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:58.915 17:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:58.915 17:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:17:58.915 17:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:58.915 17:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:58.915 17:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:58.915 17:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:58.915 17:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGJmMTRmM2M3NzU1YTgwOWEzYWU4MzU2MjUxYmMzZDULvU7a: 00:17:58.915 17:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTBkNzk4ZjIwZDQ5ZGQ2NTI5N2M2OTNiMDhhNjY3ZGQxMjk2ZTBlOWI5NzJmODY3NjFkNTc5N2JiYTc3NDJjOZIIbO0=: 00:17:58.915 17:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:58.915 17:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:58.915 17:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGJmMTRmM2M3NzU1YTgwOWEzYWU4MzU2MjUxYmMzZDULvU7a: 00:17:58.915 17:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTBkNzk4ZjIwZDQ5ZGQ2NTI5N2M2OTNiMDhhNjY3ZGQxMjk2ZTBlOWI5NzJmODY3NjFkNTc5N2JiYTc3NDJjOZIIbO0=: ]] 00:17:58.915 17:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTBkNzk4ZjIwZDQ5ZGQ2NTI5N2M2OTNiMDhhNjY3ZGQxMjk2ZTBlOWI5NzJmODY3NjFkNTc5N2JiYTc3NDJjOZIIbO0=: 00:17:58.915 17:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:17:58.915 17:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:58.915 17:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:58.915 17:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:58.915 17:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:58.915 17:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:58.915 17:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:58.915 17:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.915 17:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:58.915 17:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.915 17:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:58.915 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:58.915 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:58.915 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:58.915 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:58.915 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:58.915 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:17:58.915 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:58.915 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:58.915 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:58.915 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:58.915 17:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:58.915 17:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.915 17:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:59.478 nvme0n1 00:17:59.478 17:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.478 17:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:59.478 17:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.478 17:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:59.478 17:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:59.478 17:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.478 17:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:59.479 17:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:59.479 17:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.479 17:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:59.479 17:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.479 17:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:59.479 17:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:17:59.479 17:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:59.479 17:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:59.479 17:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:59.479 17:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:59.479 17:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTQ0YWZmYzc4ZDJhNDVlZWRkOTNmZWNhZWE4YzdjNjI1MzczNzAzMTE2ODg1ZmM0fPXbvA==: 00:17:59.479 17:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmNmMmI5NjdmYjE5OThmZWFhMjEyNDA1MGNiNzYxOTkzZDI1YWQyZGE2ZDExODFhMmfSfg==: 00:17:59.479 17:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:59.479 17:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:59.479 17:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTQ0YWZmYzc4ZDJhNDVlZWRkOTNmZWNhZWE4YzdjNjI1MzczNzAzMTE2ODg1ZmM0fPXbvA==: 00:17:59.479 17:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmNmMmI5NjdmYjE5OThmZWFhMjEyNDA1MGNiNzYxOTkzZDI1YWQyZGE2ZDExODFhMmfSfg==: ]] 00:17:59.479 17:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmNmMmI5NjdmYjE5OThmZWFhMjEyNDA1MGNiNzYxOTkzZDI1YWQyZGE2ZDExODFhMmfSfg==: 00:17:59.479 17:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
00:17:59.479 17:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:59.479 17:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:59.479 17:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:59.479 17:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:59.479 17:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:59.479 17:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:59.479 17:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.479 17:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:59.479 17:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.479 17:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:59.479 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:59.479 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:59.479 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:59.479 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:59.479 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:59.479 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:59.479 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:59.479 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:59.479 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:59.479 17:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:59.479 17:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:59.479 17:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.479 17:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:59.759 nvme0n1 00:17:59.759 17:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.759 17:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:59.759 17:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:59.759 17:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.759 17:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:59.759 17:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.759 17:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:59.759 17:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:59.759 17:06:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.759 17:06:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:59.759 17:06:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.759 17:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:17:59.759 17:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:17:59.759 17:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:59.759 17:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:59.759 17:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:59.759 17:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:59.759 17:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjUwMmExYmY4ZDExMzkzYTYwMmU2YjYyYzc4N2Q2YWas+WD7: 00:17:59.759 17:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDA5YzRjODkzMGJhMjk0NjZiNTBhODFhNTYxMjEzZDDwsH1o: 00:17:59.759 17:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:59.759 17:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:59.759 17:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjUwMmExYmY4ZDExMzkzYTYwMmU2YjYyYzc4N2Q2YWas+WD7: 00:17:59.759 17:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDA5YzRjODkzMGJhMjk0NjZiNTBhODFhNTYxMjEzZDDwsH1o: ]] 00:17:59.759 17:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDA5YzRjODkzMGJhMjk0NjZiNTBhODFhNTYxMjEzZDDwsH1o: 00:17:59.759 17:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:17:59.759 17:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:59.759 17:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:59.759 17:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:59.759 17:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:59.759 17:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:59.759 17:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:59.759 17:06:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.759 17:06:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:59.759 17:06:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.759 17:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:59.759 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:59.759 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:59.759 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:59.759 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:59.759 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:59.759 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:59.759 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:59.759 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:59.759 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:59.759 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:59.759 17:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:59.759 17:06:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.759 17:06:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:00.324 nvme0n1 00:18:00.324 17:06:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.324 17:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:00.324 17:06:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.324 17:06:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:00.324 17:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:00.324 17:06:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.324 17:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:00.324 17:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:00.324 17:06:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.324 17:06:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:00.324 17:06:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.324 17:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:00.324 17:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:18:00.324 17:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:00.324 17:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:00.324 17:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:00.324 17:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:00.324 17:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGE4ZTc1YzcyMTAwOTQ1MTEwYWM3NzYxNWY2MzRlOGU5NDYwMzAzYjVlOTkwZGUwZOK5Ig==: 00:18:00.324 17:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWU3MDQ1ZTc0NTc2ZTZmMzk0YmYyMWU4NWZlMDc2Njezi5ED: 00:18:00.324 17:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:00.324 17:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:00.324 17:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGE4ZTc1YzcyMTAwOTQ1MTEwYWM3NzYxNWY2MzRlOGU5NDYwMzAzYjVlOTkwZGUwZOK5Ig==: 00:18:00.324 17:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWU3MDQ1ZTc0NTc2ZTZmMzk0YmYyMWU4NWZlMDc2Njezi5ED: ]] 00:18:00.324 17:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWU3MDQ1ZTc0NTc2ZTZmMzk0YmYyMWU4NWZlMDc2Njezi5ED: 00:18:00.324 17:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:18:00.324 17:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:00.324 17:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:00.324 17:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:00.324 17:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:00.324 17:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:00.324 17:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:00.324 17:06:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.324 17:06:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:00.324 17:06:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.324 17:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:00.324 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:00.324 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:00.324 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:00.325 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:00.325 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:00.325 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:00.325 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:00.325 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:00.325 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:00.325 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:00.325 17:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:00.325 17:06:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.325 17:06:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:00.583 nvme0n1 00:18:00.583 17:06:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.583 17:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:00.583 17:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:00.583 17:06:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.583 17:06:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:00.583 17:06:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.583 17:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:00.583 17:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:00.583 17:06:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.583 17:06:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:00.839 17:06:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.839 17:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:00.839 17:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:18:00.839 17:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:00.839 17:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:00.839 17:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:00.839 17:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:00.839 17:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MTE3MzRmYmQ3NTUzOTE0MGM4YjYwYTk2YjU2NmE2ZDZkZDkzMzdjZjVlYzE4OGI2N2RiODBlYzY2OTE4ODliNegR4ac=: 00:18:00.839 17:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:00.839 17:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:00.839 17:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:00.839 17:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTE3MzRmYmQ3NTUzOTE0MGM4YjYwYTk2YjU2NmE2ZDZkZDkzMzdjZjVlYzE4OGI2N2RiODBlYzY2OTE4ODliNegR4ac=: 00:18:00.839 17:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:00.839 17:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:18:00.839 17:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:00.839 17:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:00.839 17:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:00.839 17:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:00.839 17:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:00.839 17:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:00.839 17:06:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.839 17:06:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:00.839 17:06:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.839 17:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:00.839 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:00.839 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:00.839 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:00.839 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:00.839 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:00.839 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:00.839 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:00.839 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:00.839 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:00.839 17:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:00.839 17:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:00.839 17:06:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.839 17:06:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:01.096 nvme0n1 00:18:01.096 17:06:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.096 17:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:01.096 17:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:01.096 17:06:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.096 17:06:51 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:01.096 17:06:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.096 17:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:01.096 17:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:01.096 17:06:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.096 17:06:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:01.096 17:06:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.096 17:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:01.096 17:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:01.096 17:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:18:01.096 17:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:01.096 17:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:01.096 17:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:01.096 17:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:01.096 17:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGJmMTRmM2M3NzU1YTgwOWEzYWU4MzU2MjUxYmMzZDULvU7a: 00:18:01.096 17:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTBkNzk4ZjIwZDQ5ZGQ2NTI5N2M2OTNiMDhhNjY3ZGQxMjk2ZTBlOWI5NzJmODY3NjFkNTc5N2JiYTc3NDJjOZIIbO0=: 00:18:01.096 17:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:01.096 17:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:01.096 17:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGJmMTRmM2M3NzU1YTgwOWEzYWU4MzU2MjUxYmMzZDULvU7a: 00:18:01.096 17:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTBkNzk4ZjIwZDQ5ZGQ2NTI5N2M2OTNiMDhhNjY3ZGQxMjk2ZTBlOWI5NzJmODY3NjFkNTc5N2JiYTc3NDJjOZIIbO0=: ]] 00:18:01.096 17:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTBkNzk4ZjIwZDQ5ZGQ2NTI5N2M2OTNiMDhhNjY3ZGQxMjk2ZTBlOWI5NzJmODY3NjFkNTc5N2JiYTc3NDJjOZIIbO0=: 00:18:01.096 17:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:18:01.096 17:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:01.096 17:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:01.096 17:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:01.096 17:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:01.096 17:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:01.096 17:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:01.096 17:06:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.096 17:06:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:01.096 17:06:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.096 17:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:01.096 17:06:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:01.096 17:06:51 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:18:01.096 17:06:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:01.096 17:06:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:01.096 17:06:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:01.096 17:06:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:01.096 17:06:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:01.096 17:06:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:01.096 17:06:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:01.096 17:06:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:01.096 17:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:01.096 17:06:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.096 17:06:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:01.661 nvme0n1 00:18:01.661 17:06:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.661 17:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:01.661 17:06:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.661 17:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:01.661 17:06:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:01.661 17:06:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.919 17:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:01.919 17:06:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:01.919 17:06:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.919 17:06:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:01.919 17:06:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.919 17:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:01.919 17:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:18:01.919 17:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:01.919 17:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:01.919 17:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:01.919 17:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:01.919 17:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTQ0YWZmYzc4ZDJhNDVlZWRkOTNmZWNhZWE4YzdjNjI1MzczNzAzMTE2ODg1ZmM0fPXbvA==: 00:18:01.919 17:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmNmMmI5NjdmYjE5OThmZWFhMjEyNDA1MGNiNzYxOTkzZDI1YWQyZGE2ZDExODFhMmfSfg==: 00:18:01.919 17:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:01.919 17:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:01.919 17:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MTQ0YWZmYzc4ZDJhNDVlZWRkOTNmZWNhZWE4YzdjNjI1MzczNzAzMTE2ODg1ZmM0fPXbvA==: 00:18:01.919 17:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmNmMmI5NjdmYjE5OThmZWFhMjEyNDA1MGNiNzYxOTkzZDI1YWQyZGE2ZDExODFhMmfSfg==: ]] 00:18:01.919 17:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmNmMmI5NjdmYjE5OThmZWFhMjEyNDA1MGNiNzYxOTkzZDI1YWQyZGE2ZDExODFhMmfSfg==: 00:18:01.919 17:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:18:01.919 17:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:01.919 17:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:01.919 17:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:01.919 17:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:01.919 17:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:01.919 17:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:01.919 17:06:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.919 17:06:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:01.919 17:06:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.919 17:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:01.919 17:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:01.919 17:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:01.919 17:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:01.919 17:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:01.919 17:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:01.919 17:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:01.919 17:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:01.919 17:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:01.919 17:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:01.919 17:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:01.919 17:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:01.919 17:06:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.919 17:06:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:02.487 nvme0n1 00:18:02.487 17:06:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.487 17:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:02.487 17:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:02.487 17:06:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.487 17:06:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:02.487 17:06:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.487 17:06:52 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.487 17:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:02.487 17:06:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.487 17:06:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:02.487 17:06:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.487 17:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:02.487 17:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:18:02.487 17:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:02.487 17:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:02.487 17:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:02.487 17:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:02.487 17:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjUwMmExYmY4ZDExMzkzYTYwMmU2YjYyYzc4N2Q2YWas+WD7: 00:18:02.487 17:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDA5YzRjODkzMGJhMjk0NjZiNTBhODFhNTYxMjEzZDDwsH1o: 00:18:02.487 17:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:02.487 17:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:02.487 17:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjUwMmExYmY4ZDExMzkzYTYwMmU2YjYyYzc4N2Q2YWas+WD7: 00:18:02.487 17:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDA5YzRjODkzMGJhMjk0NjZiNTBhODFhNTYxMjEzZDDwsH1o: ]] 00:18:02.487 17:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDA5YzRjODkzMGJhMjk0NjZiNTBhODFhNTYxMjEzZDDwsH1o: 00:18:02.487 17:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:18:02.487 17:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:02.487 17:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:02.487 17:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:02.487 17:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:02.487 17:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:02.487 17:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:02.487 17:06:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.487 17:06:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:02.487 17:06:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.487 17:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:02.487 17:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:02.487 17:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:02.487 17:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:02.487 17:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:02.487 17:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:02.487 17:06:52 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:02.487 17:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:02.487 17:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:02.487 17:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:02.487 17:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:02.487 17:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:02.487 17:06:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.487 17:06:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.055 nvme0n1 00:18:03.055 17:06:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.313 17:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:03.313 17:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:03.313 17:06:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.313 17:06:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.313 17:06:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.313 17:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.313 17:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:03.313 17:06:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.313 17:06:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.313 17:06:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.313 17:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:03.313 17:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:18:03.313 17:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:03.313 17:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:03.313 17:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:03.313 17:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:03.313 17:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGE4ZTc1YzcyMTAwOTQ1MTEwYWM3NzYxNWY2MzRlOGU5NDYwMzAzYjVlOTkwZGUwZOK5Ig==: 00:18:03.313 17:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWU3MDQ1ZTc0NTc2ZTZmMzk0YmYyMWU4NWZlMDc2Njezi5ED: 00:18:03.313 17:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:03.313 17:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:03.313 17:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGE4ZTc1YzcyMTAwOTQ1MTEwYWM3NzYxNWY2MzRlOGU5NDYwMzAzYjVlOTkwZGUwZOK5Ig==: 00:18:03.313 17:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWU3MDQ1ZTc0NTc2ZTZmMzk0YmYyMWU4NWZlMDc2Njezi5ED: ]] 00:18:03.313 17:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWU3MDQ1ZTc0NTc2ZTZmMzk0YmYyMWU4NWZlMDc2Njezi5ED: 00:18:03.313 17:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:18:03.313 17:06:53 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:03.313 17:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:03.313 17:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:03.313 17:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:03.313 17:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:03.313 17:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:03.313 17:06:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.313 17:06:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.313 17:06:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.313 17:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:03.313 17:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:03.313 17:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:03.313 17:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:03.313 17:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:03.313 17:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:03.313 17:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:03.313 17:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:03.313 17:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:03.313 17:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:03.313 17:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:03.313 17:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:03.313 17:06:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.313 17:06:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.880 nvme0n1 00:18:03.881 17:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.881 17:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:03.881 17:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.881 17:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.881 17:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:03.881 17:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.881 17:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.881 17:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:03.881 17:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.881 17:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.881 17:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.881 17:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:18:03.881 17:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:18:03.881 17:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:03.881 17:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:03.881 17:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:03.881 17:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:03.881 17:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTE3MzRmYmQ3NTUzOTE0MGM4YjYwYTk2YjU2NmE2ZDZkZDkzMzdjZjVlYzE4OGI2N2RiODBlYzY2OTE4ODliNegR4ac=: 00:18:03.881 17:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:03.881 17:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:03.881 17:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:03.881 17:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTE3MzRmYmQ3NTUzOTE0MGM4YjYwYTk2YjU2NmE2ZDZkZDkzMzdjZjVlYzE4OGI2N2RiODBlYzY2OTE4ODliNegR4ac=: 00:18:03.881 17:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:03.881 17:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:18:03.881 17:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:03.881 17:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:03.881 17:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:03.881 17:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:03.881 17:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:03.881 17:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:03.881 17:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.881 17:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.881 17:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.881 17:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:03.881 17:06:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:03.881 17:06:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:03.881 17:06:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:03.881 17:06:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:03.881 17:06:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:03.881 17:06:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:03.881 17:06:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:03.881 17:06:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:03.881 17:06:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:03.881 17:06:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:03.881 17:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:03.881 17:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:18:03.881 17:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.448 nvme0n1 00:18:04.448 17:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.448 17:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:04.448 17:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:04.448 17:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.448 17:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.448 17:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.706 17:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.706 17:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:04.706 17:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.706 17:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.706 17:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.706 17:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:18:04.706 17:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:04.707 17:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:04.707 17:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:04.707 17:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:04.707 17:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTQ0YWZmYzc4ZDJhNDVlZWRkOTNmZWNhZWE4YzdjNjI1MzczNzAzMTE2ODg1ZmM0fPXbvA==: 00:18:04.707 17:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmNmMmI5NjdmYjE5OThmZWFhMjEyNDA1MGNiNzYxOTkzZDI1YWQyZGE2ZDExODFhMmfSfg==: 00:18:04.707 17:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:04.707 17:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:04.707 17:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTQ0YWZmYzc4ZDJhNDVlZWRkOTNmZWNhZWE4YzdjNjI1MzczNzAzMTE2ODg1ZmM0fPXbvA==: 00:18:04.707 17:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmNmMmI5NjdmYjE5OThmZWFhMjEyNDA1MGNiNzYxOTkzZDI1YWQyZGE2ZDExODFhMmfSfg==: ]] 00:18:04.707 17:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmNmMmI5NjdmYjE5OThmZWFhMjEyNDA1MGNiNzYxOTkzZDI1YWQyZGE2ZDExODFhMmfSfg==: 00:18:04.707 17:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:04.707 17:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.707 17:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.707 17:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.707 17:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:18:04.707 17:06:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:04.707 17:06:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:04.707 17:06:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:04.707 17:06:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:04.707 
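The stretch of trace above is host/auth.sh cycling through each generated DH-HMAC-CHAP key with the sha512 digest and the ffdhe8192 DH group: load the key (and controller key, when one exists) into the kernel nvmet target, pin the initiator to the same digest/DH group, attach over TCP, confirm the controller came up, then detach. A condensed sketch of that loop, assuming the harness helpers (rpc_cmd, nvmet_auth_set_key, keys/ckeys arrays) seen in the trace; the configfs writes inside nvmet_auth_set_key are not spelled out here:

    for keyid in "${!keys[@]}"; do
        nvmet_auth_set_key sha512 ffdhe8192 "$keyid"          # install key (and ckey, if any) on the kernel target
        rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key$keyid" ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"}
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]   # controller exists only if auth succeeded
        rpc_cmd bdev_nvme_detach_controller nvme0
    done
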
17:06:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:04.707 17:06:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:04.707 17:06:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:04.707 17:06:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:04.707 17:06:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:04.707 17:06:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:04.707 17:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:18:04.707 17:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:18:04.707 17:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:18:04.707 17:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:18:04.707 17:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:04.707 17:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:18:04.707 17:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:04.707 17:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:18:04.707 17:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.707 17:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.707 request: 00:18:04.707 { 00:18:04.707 "name": "nvme0", 00:18:04.707 "trtype": "tcp", 00:18:04.707 "traddr": "10.0.0.1", 00:18:04.707 "adrfam": "ipv4", 00:18:04.707 "trsvcid": "4420", 00:18:04.707 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:18:04.707 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:18:04.707 "prchk_reftag": false, 00:18:04.707 "prchk_guard": false, 00:18:04.707 "hdgst": false, 00:18:04.707 "ddgst": false, 00:18:04.707 "method": "bdev_nvme_attach_controller", 00:18:04.707 "req_id": 1 00:18:04.707 } 00:18:04.707 Got JSON-RPC error response 00:18:04.707 response: 00:18:04.707 { 00:18:04.707 "code": -5, 00:18:04.707 "message": "Input/output error" 00:18:04.707 } 00:18:04.707 17:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:18:04.707 17:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:18:04.707 17:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:04.707 17:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:04.707 17:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:04.707 17:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:18:04.707 17:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.707 17:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.707 17:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:18:04.707 17:06:54 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.707 17:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:18:04.707 17:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:18:04.707 17:06:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:04.707 17:06:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:04.707 17:06:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:04.707 17:06:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:04.707 17:06:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:04.707 17:06:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:04.707 17:06:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:04.707 17:06:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:04.707 17:06:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:04.707 17:06:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:04.707 17:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:18:04.707 17:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:18:04.707 17:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:18:04.707 17:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:18:04.707 17:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:04.707 17:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:18:04.707 17:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:04.707 17:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:18:04.707 17:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.707 17:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.707 request: 00:18:04.707 { 00:18:04.707 "name": "nvme0", 00:18:04.707 "trtype": "tcp", 00:18:04.707 "traddr": "10.0.0.1", 00:18:04.707 "adrfam": "ipv4", 00:18:04.707 "trsvcid": "4420", 00:18:04.707 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:18:04.707 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:18:04.707 "prchk_reftag": false, 00:18:04.707 "prchk_guard": false, 00:18:04.707 "hdgst": false, 00:18:04.707 "ddgst": false, 00:18:04.707 "dhchap_key": "key2", 00:18:04.707 "method": "bdev_nvme_attach_controller", 00:18:04.707 "req_id": 1 00:18:04.707 } 00:18:04.707 Got JSON-RPC error response 00:18:04.707 response: 00:18:04.707 { 00:18:04.707 "code": -5, 00:18:04.707 "message": "Input/output error" 00:18:04.707 } 00:18:04.707 17:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:18:04.707 17:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:18:04.707 17:06:54 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:04.707 17:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:04.707 17:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:04.707 17:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:18:04.707 17:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.707 17:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.707 17:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:18:04.707 17:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.707 17:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:18:04.707 17:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:18:04.707 17:06:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:04.707 17:06:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:04.707 17:06:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:04.707 17:06:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:04.707 17:06:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:04.708 17:06:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:04.708 17:06:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:04.708 17:06:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:04.708 17:06:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:04.708 17:06:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:04.708 17:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:04.708 17:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:18:04.708 17:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:04.708 17:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:18:04.708 17:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:04.708 17:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:18:04.708 17:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:04.708 17:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:04.708 17:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.708 17:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.966 request: 00:18:04.966 { 00:18:04.966 "name": "nvme0", 00:18:04.966 "trtype": "tcp", 00:18:04.966 "traddr": "10.0.0.1", 00:18:04.966 "adrfam": "ipv4", 
00:18:04.966 "trsvcid": "4420", 00:18:04.966 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:18:04.966 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:18:04.966 "prchk_reftag": false, 00:18:04.966 "prchk_guard": false, 00:18:04.966 "hdgst": false, 00:18:04.966 "ddgst": false, 00:18:04.966 "dhchap_key": "key1", 00:18:04.966 "dhchap_ctrlr_key": "ckey2", 00:18:04.966 "method": "bdev_nvme_attach_controller", 00:18:04.966 "req_id": 1 00:18:04.966 } 00:18:04.966 Got JSON-RPC error response 00:18:04.966 response: 00:18:04.966 { 00:18:04.966 "code": -5, 00:18:04.966 "message": "Input/output error" 00:18:04.966 } 00:18:04.966 17:06:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:18:04.966 17:06:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:18:04.966 17:06:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:04.966 17:06:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:04.966 17:06:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:04.966 17:06:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:18:04.966 17:06:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:18:04.967 17:06:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:18:04.967 17:06:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:04.967 17:06:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:18:04.967 17:06:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:04.967 17:06:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:18:04.967 17:06:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:04.967 17:06:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:04.967 rmmod nvme_tcp 00:18:04.967 rmmod nvme_fabrics 00:18:04.967 17:06:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:04.967 17:06:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:18:04.967 17:06:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:18:04.967 17:06:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 78499 ']' 00:18:04.967 17:06:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 78499 00:18:04.967 17:06:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 78499 ']' 00:18:04.967 17:06:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 78499 00:18:04.967 17:06:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:18:04.967 17:06:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:04.967 17:06:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78499 00:18:04.967 17:06:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:04.967 17:06:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:04.967 17:06:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78499' 00:18:04.967 killing process with pid 78499 00:18:04.967 17:06:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 78499 00:18:04.967 17:06:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 78499 00:18:05.225 17:06:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:05.225 
17:06:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:05.225 17:06:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:05.225 17:06:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:05.225 17:06:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:05.225 17:06:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:05.225 17:06:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:05.225 17:06:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:05.225 17:06:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:05.225 17:06:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:18:05.225 17:06:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:18:05.225 17:06:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:18:05.225 17:06:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:18:05.225 17:06:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:18:05.225 17:06:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:18:05.225 17:06:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:18:05.225 17:06:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:18:05.225 17:06:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:18:05.225 17:06:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:18:05.225 17:06:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:18:05.225 17:06:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:05.792 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:06.049 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:18:06.049 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:18:06.050 17:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.Geg /tmp/spdk.key-null.Laf /tmp/spdk.key-sha256.Rt9 /tmp/spdk.key-sha384.ySL /tmp/spdk.key-sha512.Mp8 /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:18:06.050 17:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:06.663 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:06.663 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:06.663 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:06.663 00:18:06.663 real 0m35.431s 00:18:06.663 user 0m31.703s 00:18:06.663 sys 0m3.577s 00:18:06.663 17:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:06.663 ************************************ 00:18:06.663 END TEST nvmf_auth_host 00:18:06.663 
************************************ 00:18:06.663 17:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:06.663 17:06:56 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:06.663 17:06:56 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:18:06.663 17:06:56 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:18:06.663 17:06:56 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:06.663 17:06:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:06.663 17:06:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:06.663 ************************************ 00:18:06.663 START TEST nvmf_digest 00:18:06.663 ************************************ 00:18:06.663 17:06:56 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:18:06.663 * Looking for test storage... 00:18:06.663 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:06.663 17:06:56 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:06.663 17:06:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:18:06.663 17:06:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:06.663 17:06:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:06.663 17:06:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:06.663 17:06:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:06.663 17:06:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:06.663 17:06:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:06.663 17:06:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:06.663 17:06:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:06.663 17:06:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:06.663 17:06:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:06.663 17:06:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da 00:18:06.663 17:06:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=0b4e8503-7bac-4879-926a-209303c4b3da 00:18:06.663 17:06:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:06.663 17:06:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:06.663 17:06:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:06.663 17:06:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:06.663 17:06:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:06.663 17:06:56 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:06.663 17:06:56 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:06.663 17:06:56 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:06.663 17:06:56 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:06.663 17:06:56 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:06.663 17:06:56 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:06.663 17:06:56 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:18:06.663 17:06:56 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:06.663 17:06:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:18:06.663 17:06:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:06.663 17:06:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:06.663 17:06:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:06.663 17:06:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:06.663 17:06:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:06.663 17:06:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:06.663 17:06:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:06.663 17:06:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:06.663 17:06:56 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:18:06.663 17:06:56 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:18:06.663 17:06:56 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:18:06.663 17:06:56 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != 
\t\c\p ]] 00:18:06.663 17:06:56 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:18:06.663 17:06:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:06.663 17:06:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:06.663 17:06:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:06.663 17:06:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:06.663 17:06:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:06.663 17:06:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:06.663 17:06:56 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:06.663 17:06:56 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:06.663 17:06:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:06.663 17:06:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:06.663 17:06:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:06.663 17:06:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:06.663 17:06:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:06.663 17:06:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:06.663 17:06:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:06.663 17:06:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:06.664 17:06:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:06.664 17:06:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:06.664 17:06:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:06.664 17:06:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:06.664 17:06:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:06.664 17:06:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:06.664 17:06:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:06.664 17:06:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:06.664 17:06:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:06.664 17:06:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:06.664 17:06:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:06.664 17:06:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:06.664 Cannot find device "nvmf_tgt_br" 00:18:06.664 17:06:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@155 -- # true 00:18:06.664 17:06:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:06.664 Cannot find device "nvmf_tgt_br2" 00:18:06.664 17:06:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@156 -- # true 00:18:06.664 17:06:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:06.664 17:06:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:06.664 Cannot find device "nvmf_tgt_br" 00:18:06.664 17:06:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@158 -- # true 00:18:06.664 17:06:56 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:06.664 Cannot find device "nvmf_tgt_br2" 00:18:06.664 17:06:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@159 -- # true 00:18:06.664 17:06:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:06.937 17:06:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:06.937 17:06:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:06.937 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:06.937 17:06:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@162 -- # true 00:18:06.937 17:06:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:06.937 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:06.937 17:06:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@163 -- # true 00:18:06.937 17:06:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:06.937 17:06:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:06.937 17:06:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:06.937 17:06:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:06.937 17:06:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:06.937 17:06:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:06.937 17:06:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:06.937 17:06:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:06.937 17:06:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:06.937 17:06:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:06.937 17:06:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:06.937 17:06:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:06.937 17:06:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:06.937 17:06:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:06.937 17:06:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:06.937 17:06:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:06.937 17:06:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:06.937 17:06:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:06.937 17:06:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:06.937 17:06:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:06.937 17:06:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:06.937 17:06:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:06.937 17:06:57 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:06.937 17:06:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:06.937 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:06.937 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:18:06.937 00:18:06.937 --- 10.0.0.2 ping statistics --- 00:18:06.937 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:06.937 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:18:06.937 17:06:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:06.937 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:06.937 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:18:06.937 00:18:06.937 --- 10.0.0.3 ping statistics --- 00:18:06.937 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:06.937 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:18:06.937 17:06:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:06.937 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:06.937 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:18:06.937 00:18:06.937 --- 10.0.0.1 ping statistics --- 00:18:06.937 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:06.937 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:18:06.937 17:06:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:06.937 17:06:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@433 -- # return 0 00:18:06.937 17:06:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:06.937 17:06:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:06.937 17:06:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:06.937 17:06:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:06.937 17:06:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:06.937 17:06:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:06.937 17:06:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:06.937 17:06:57 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:18:06.937 17:06:57 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:18:06.937 17:06:57 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:18:06.937 17:06:57 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:18:06.937 17:06:57 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:06.937 17:06:57 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:18:06.937 ************************************ 00:18:06.937 START TEST nvmf_digest_clean 00:18:06.937 ************************************ 00:18:06.937 17:06:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:18:06.937 17:06:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:18:06.937 17:06:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:18:06.937 17:06:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:18:06.937 17:06:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:18:06.937 17:06:57 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:18:06.937 17:06:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:06.937 17:06:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:06.937 17:06:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:06.937 17:06:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=80069 00:18:06.937 17:06:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 80069 00:18:06.937 17:06:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:18:06.937 17:06:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 80069 ']' 00:18:06.937 17:06:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:06.937 17:06:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:06.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:06.937 17:06:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:06.937 17:06:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:06.937 17:06:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:07.196 [2024-07-15 17:06:57.259336] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:18:07.196 [2024-07-15 17:06:57.259443] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:07.196 [2024-07-15 17:06:57.395468] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:07.454 [2024-07-15 17:06:57.525724] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:07.454 [2024-07-15 17:06:57.525788] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:07.454 [2024-07-15 17:06:57.525804] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:07.454 [2024-07-15 17:06:57.525815] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:07.454 [2024-07-15 17:06:57.525824] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
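Before the digest workload can run, nvmf_veth_init (traced above) builds a small virtual topology: the SPDK target lives in the nvmf_tgt_ns_spdk namespace behind a veth pair, the initiator keeps 10.0.0.1 on the host side, everything hangs off the nvmf_br bridge, and the pings confirm reachability before the target is started with --wait-for-rpc. Roughly, with the second target interface (10.0.0.3) and the link-up steps omitted:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator side
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                             # initiator -> target check seen in the log
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
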
00:18:07.454 [2024-07-15 17:06:57.525853] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:08.021 17:06:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:08.021 17:06:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:18:08.021 17:06:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:08.021 17:06:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:08.021 17:06:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:08.021 17:06:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:08.021 17:06:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:18:08.021 17:06:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:18:08.021 17:06:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:18:08.021 17:06:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.021 17:06:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:08.280 [2024-07-15 17:06:58.363351] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:08.280 null0 00:18:08.280 [2024-07-15 17:06:58.416515] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:08.280 [2024-07-15 17:06:58.440627] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:08.280 17:06:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.280 17:06:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:18:08.280 17:06:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:18:08.280 17:06:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:18:08.280 17:06:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:18:08.280 17:06:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:18:08.280 17:06:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:18:08.280 17:06:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:18:08.280 17:06:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80101 00:18:08.280 17:06:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80101 /var/tmp/bperf.sock 00:18:08.280 17:06:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:18:08.280 17:06:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 80101 ']' 00:18:08.280 17:06:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:08.280 17:06:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:08.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:18:08.280 17:06:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:08.280 17:06:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:08.280 17:06:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:08.280 [2024-07-15 17:06:58.502870] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:18:08.280 [2024-07-15 17:06:58.502998] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80101 ] 00:18:08.539 [2024-07-15 17:06:58.643006] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:08.539 [2024-07-15 17:06:58.756672] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:09.475 17:06:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:09.475 17:06:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:18:09.475 17:06:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:18:09.475 17:06:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:18:09.475 17:06:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:18:09.733 [2024-07-15 17:06:59.892273] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:09.734 17:06:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:09.734 17:06:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:10.300 nvme0n1 00:18:10.300 17:07:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:18:10.300 17:07:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:10.300 Running I/O for 2 seconds... 
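Each digest-clean pass follows the same shape; this first one (randread, 4 KiB, queue depth 128) reduces to commands that all appear verbatim in the trace above. A minimal stand-alone sketch, assuming the nvmf target is already listening on 10.0.0.2:4420 as configured earlier (this is an illustration, not the harness's actual run_bperf helper):

  #!/usr/bin/env bash
  SPDK=/home/vagrant/spdk_repo/spdk
  BPERF_SOCK=/var/tmp/bperf.sock

  # Start bdevperf paused (-z --wait-for-rpc) so the NVMe bdev can be configured over RPC first.
  $SPDK/build/examples/bdevperf -m 2 -r $BPERF_SOCK -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
  while [ ! -S $BPERF_SOCK ]; do sleep 0.1; done   # the harness does this via waitforlisten

  # Finish framework init, then attach the NVMe/TCP controller with data digest enabled (--ddgst).
  $SPDK/scripts/rpc.py -s $BPERF_SOCK framework_start_init
  $SPDK/scripts/rpc.py -s $BPERF_SOCK bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Kick off the timed run that produces the Latency(us) table below.
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s $BPERF_SOCK perform_tests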
00:18:12.203 00:18:12.203 Latency(us) 00:18:12.203 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:12.203 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:18:12.203 nvme0n1 : 2.01 14632.33 57.16 0.00 0.00 8740.07 7864.32 21448.15 00:18:12.203 =================================================================================================================== 00:18:12.203 Total : 14632.33 57.16 0.00 0.00 8740.07 7864.32 21448.15 00:18:12.203 0 00:18:12.203 17:07:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:18:12.203 17:07:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:18:12.203 17:07:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:18:12.203 17:07:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:18:12.203 | select(.opcode=="crc32c") 00:18:12.203 | "\(.module_name) \(.executed)"' 00:18:12.203 17:07:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:18:12.463 17:07:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:18:12.463 17:07:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:18:12.463 17:07:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:18:12.463 17:07:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:12.463 17:07:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80101 00:18:12.463 17:07:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 80101 ']' 00:18:12.463 17:07:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 80101 00:18:12.463 17:07:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:18:12.463 17:07:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:12.463 17:07:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80101 00:18:12.463 17:07:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:12.463 17:07:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:12.463 killing process with pid 80101 00:18:12.463 17:07:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80101' 00:18:12.463 17:07:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 80101 00:18:12.463 Received shutdown signal, test time was about 2.000000 seconds 00:18:12.463 00:18:12.463 Latency(us) 00:18:12.463 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:12.463 =================================================================================================================== 00:18:12.463 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:12.463 17:07:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 80101 00:18:12.723 17:07:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:18:12.723 17:07:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:18:12.723 17:07:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:18:12.723 17:07:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:18:12.723 17:07:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:18:12.723 17:07:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:18:12.723 17:07:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:18:12.723 17:07:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80167 00:18:12.723 17:07:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:18:12.723 17:07:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80167 /var/tmp/bperf.sock 00:18:12.723 17:07:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 80167 ']' 00:18:12.723 17:07:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:12.723 17:07:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:12.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:12.723 17:07:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:12.723 17:07:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:12.723 17:07:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:12.723 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:12.723 Zero copy mechanism will not be used. 00:18:12.723 [2024-07-15 17:07:02.993770] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:18:12.723 [2024-07-15 17:07:02.993849] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80167 ] 00:18:12.983 [2024-07-15 17:07:03.124902] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:12.983 [2024-07-15 17:07:03.241141] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:13.919 17:07:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:13.919 17:07:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:18:13.919 17:07:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:18:13.919 17:07:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:18:13.919 17:07:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:18:14.181 [2024-07-15 17:07:04.324722] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:14.181 17:07:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:14.181 17:07:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:14.439 nvme0n1 00:18:14.439 17:07:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:18:14.439 17:07:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:14.696 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:14.696 Zero copy mechanism will not be used. 00:18:14.696 Running I/O for 2 seconds... 
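The pass/fail decision after each of these runs comes from the accel statistics check seen at host/digest.sh@93-96 above: the crc32c entry is pulled out of accel_get_stats and, with scan_dsa=false, the executed count must be non-zero and the module must be "software". A stand-alone version of that check, assuming the same bperf socket, might look like:

  BPERF_SOCK=/var/tmp/bperf.sock
  SPDK=/home/vagrant/spdk_repo/spdk

  # Fetch accel stats from the running bdevperf instance and keep only the crc32c row.
  read -r acc_module acc_executed < <(
      $SPDK/scripts/rpc.py -s $BPERF_SOCK accel_get_stats |
      jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
  )

  # Digest offload disabled (scan_dsa=false): expect software crc32c, executed at least once.
  (( acc_executed > 0 )) && [ "$acc_module" = software ] && echo "crc32c digest check passed"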
00:18:16.599 00:18:16.599 Latency(us) 00:18:16.599 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:16.599 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:18:16.599 nvme0n1 : 2.00 7527.53 940.94 0.00 0.00 2121.97 1951.19 3932.16 00:18:16.599 =================================================================================================================== 00:18:16.599 Total : 7527.53 940.94 0.00 0.00 2121.97 1951.19 3932.16 00:18:16.599 0 00:18:16.599 17:07:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:18:16.599 17:07:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:18:16.599 17:07:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:18:16.599 17:07:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:18:16.599 17:07:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:18:16.599 | select(.opcode=="crc32c") 00:18:16.599 | "\(.module_name) \(.executed)"' 00:18:16.858 17:07:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:18:16.858 17:07:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:18:16.858 17:07:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:18:16.858 17:07:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:16.859 17:07:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80167 00:18:16.859 17:07:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 80167 ']' 00:18:16.859 17:07:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 80167 00:18:16.859 17:07:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:18:16.859 17:07:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:16.859 17:07:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80167 00:18:17.121 17:07:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:17.121 17:07:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:17.121 killing process with pid 80167 00:18:17.121 17:07:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80167' 00:18:17.121 17:07:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 80167 00:18:17.121 Received shutdown signal, test time was about 2.000000 seconds 00:18:17.121 00:18:17.121 Latency(us) 00:18:17.121 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:17.121 =================================================================================================================== 00:18:17.121 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:17.121 17:07:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 80167 00:18:17.121 17:07:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:18:17.121 17:07:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:18:17.121 17:07:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:18:17.121 17:07:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:18:17.121 17:07:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:18:17.121 17:07:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:18:17.121 17:07:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:18:17.121 17:07:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80226 00:18:17.121 17:07:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80226 /var/tmp/bperf.sock 00:18:17.121 17:07:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:18:17.121 17:07:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 80226 ']' 00:18:17.121 17:07:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:17.121 17:07:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:17.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:17.121 17:07:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:17.121 17:07:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:17.121 17:07:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:17.381 [2024-07-15 17:07:07.450486] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:18:17.381 [2024-07-15 17:07:07.450578] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80226 ] 00:18:17.381 [2024-07-15 17:07:07.589978] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:17.640 [2024-07-15 17:07:07.706855] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:18.207 17:07:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:18.207 17:07:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:18:18.207 17:07:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:18:18.207 17:07:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:18:18.207 17:07:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:18:18.466 [2024-07-15 17:07:08.757018] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:18.724 17:07:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:18.724 17:07:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:18.983 nvme0n1 00:18:18.983 17:07:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:18:18.983 17:07:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:18.983 Running I/O for 2 seconds... 
00:18:21.516 00:18:21.516 Latency(us) 00:18:21.516 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:21.516 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:21.516 nvme0n1 : 2.01 15935.64 62.25 0.00 0.00 8024.72 2323.55 15132.86 00:18:21.516 =================================================================================================================== 00:18:21.516 Total : 15935.64 62.25 0.00 0.00 8024.72 2323.55 15132.86 00:18:21.516 0 00:18:21.516 17:07:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:18:21.516 17:07:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:18:21.516 17:07:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:18:21.516 17:07:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:18:21.516 | select(.opcode=="crc32c") 00:18:21.516 | "\(.module_name) \(.executed)"' 00:18:21.516 17:07:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:18:21.516 17:07:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:18:21.516 17:07:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:18:21.516 17:07:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:18:21.516 17:07:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:21.516 17:07:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80226 00:18:21.516 17:07:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 80226 ']' 00:18:21.516 17:07:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 80226 00:18:21.516 17:07:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:18:21.516 17:07:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:21.516 17:07:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80226 00:18:21.516 17:07:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:21.516 17:07:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:21.516 killing process with pid 80226 00:18:21.516 17:07:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80226' 00:18:21.516 Received shutdown signal, test time was about 2.000000 seconds 00:18:21.516 00:18:21.516 Latency(us) 00:18:21.516 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:21.516 =================================================================================================================== 00:18:21.516 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:21.516 17:07:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 80226 00:18:21.516 17:07:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 80226 00:18:21.775 17:07:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:18:21.775 17:07:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:18:21.775 17:07:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:18:21.775 17:07:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:18:21.775 17:07:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:18:21.775 17:07:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:18:21.775 17:07:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:18:21.775 17:07:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80288 00:18:21.775 17:07:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80288 /var/tmp/bperf.sock 00:18:21.775 17:07:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:18:21.775 17:07:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 80288 ']' 00:18:21.775 17:07:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:21.775 17:07:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:21.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:21.775 17:07:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:21.775 17:07:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:21.775 17:07:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:21.775 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:21.775 Zero copy mechanism will not be used. 00:18:21.775 [2024-07-15 17:07:11.885420] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:18:21.775 [2024-07-15 17:07:11.885505] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80288 ] 00:18:21.775 [2024-07-15 17:07:12.024938] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:22.032 [2024-07-15 17:07:12.140807] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:22.597 17:07:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:22.597 17:07:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:18:22.854 17:07:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:18:22.854 17:07:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:18:22.854 17:07:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:18:23.113 [2024-07-15 17:07:13.200414] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:23.113 17:07:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:23.113 17:07:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:23.371 nvme0n1 00:18:23.371 17:07:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:18:23.371 17:07:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:23.629 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:23.629 Zero copy mechanism will not be used. 00:18:23.629 Running I/O for 2 seconds... 
00:18:25.601 00:18:25.601 Latency(us) 00:18:25.601 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:25.601 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:18:25.601 nvme0n1 : 2.00 5946.02 743.25 0.00 0.00 2684.76 1995.87 8757.99 00:18:25.601 =================================================================================================================== 00:18:25.601 Total : 5946.02 743.25 0.00 0.00 2684.76 1995.87 8757.99 00:18:25.601 0 00:18:25.601 17:07:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:18:25.601 17:07:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:18:25.601 17:07:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:18:25.601 17:07:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:18:25.601 17:07:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:18:25.601 | select(.opcode=="crc32c") 00:18:25.601 | "\(.module_name) \(.executed)"' 00:18:25.859 17:07:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:18:25.859 17:07:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:18:25.859 17:07:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:18:25.859 17:07:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:25.859 17:07:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80288 00:18:25.859 17:07:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 80288 ']' 00:18:25.859 17:07:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 80288 00:18:25.859 17:07:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:18:25.859 17:07:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:25.859 17:07:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80288 00:18:25.859 17:07:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:25.859 17:07:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:25.859 killing process with pid 80288 00:18:25.859 17:07:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80288' 00:18:25.859 Received shutdown signal, test time was about 2.000000 seconds 00:18:25.859 00:18:25.859 Latency(us) 00:18:25.859 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:25.859 =================================================================================================================== 00:18:25.859 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:25.859 17:07:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 80288 00:18:25.859 17:07:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 80288 00:18:26.117 17:07:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 80069 00:18:26.117 17:07:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@948 -- # '[' -z 80069 ']' 00:18:26.117 17:07:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 80069 00:18:26.117 17:07:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:18:26.117 17:07:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:26.117 17:07:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80069 00:18:26.117 17:07:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:26.117 17:07:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:26.117 killing process with pid 80069 00:18:26.117 17:07:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80069' 00:18:26.117 17:07:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 80069 00:18:26.117 17:07:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 80069 00:18:26.375 00:18:26.375 real 0m19.346s 00:18:26.375 user 0m37.930s 00:18:26.375 sys 0m4.804s 00:18:26.375 17:07:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:26.375 17:07:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:26.375 ************************************ 00:18:26.375 END TEST nvmf_digest_clean 00:18:26.375 ************************************ 00:18:26.375 17:07:16 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:18:26.375 17:07:16 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:18:26.375 17:07:16 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:18:26.375 17:07:16 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:26.375 17:07:16 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:18:26.375 ************************************ 00:18:26.375 START TEST nvmf_digest_error 00:18:26.375 ************************************ 00:18:26.375 17:07:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:18:26.375 17:07:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:18:26.375 17:07:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:26.375 17:07:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:26.375 17:07:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:26.375 17:07:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=80371 00:18:26.375 17:07:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 80371 00:18:26.375 17:07:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:18:26.375 17:07:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 80371 ']' 00:18:26.375 17:07:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:26.375 17:07:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:18:26.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:26.375 17:07:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:26.375 17:07:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:26.375 17:07:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:26.375 [2024-07-15 17:07:16.668162] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:18:26.375 [2024-07-15 17:07:16.668249] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:26.633 [2024-07-15 17:07:16.803307] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:26.633 [2024-07-15 17:07:16.918997] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:26.633 [2024-07-15 17:07:16.919268] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:26.633 [2024-07-15 17:07:16.919348] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:26.633 [2024-07-15 17:07:16.919458] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:26.633 [2024-07-15 17:07:16.919542] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:26.633 [2024-07-15 17:07:16.919633] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:27.588 17:07:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:27.588 17:07:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:18:27.588 17:07:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:27.588 17:07:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:27.588 17:07:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:27.588 17:07:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:27.588 17:07:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:18:27.588 17:07:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.588 17:07:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:27.588 [2024-07-15 17:07:17.700193] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:18:27.588 17:07:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.588 17:07:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:18:27.588 17:07:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:18:27.588 17:07:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.588 17:07:17 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:27.588 [2024-07-15 17:07:17.761445] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:27.588 null0 00:18:27.588 [2024-07-15 17:07:17.811806] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:27.588 [2024-07-15 17:07:17.835956] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:27.588 17:07:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.588 17:07:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:18:27.588 17:07:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:18:27.588 17:07:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:18:27.588 17:07:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:18:27.588 17:07:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:18:27.588 17:07:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80403 00:18:27.588 17:07:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:18:27.588 17:07:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80403 /var/tmp/bperf.sock 00:18:27.588 17:07:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 80403 ']' 00:18:27.588 17:07:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:27.588 17:07:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:27.588 17:07:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:27.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:27.588 17:07:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:27.588 17:07:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:27.846 [2024-07-15 17:07:17.894062] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:18:27.846 [2024-07-15 17:07:17.894172] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80403 ] 00:18:27.846 [2024-07-15 17:07:18.038101] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:28.104 [2024-07-15 17:07:18.173515] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:28.104 [2024-07-15 17:07:18.230968] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:28.671 17:07:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:28.671 17:07:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:18:28.671 17:07:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:28.671 17:07:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:28.930 17:07:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:18:28.930 17:07:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.930 17:07:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:28.930 17:07:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.930 17:07:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:28.930 17:07:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:29.189 nvme0n1 00:18:29.447 17:07:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:18:29.447 17:07:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.447 17:07:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:29.447 17:07:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.447 17:07:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:18:29.447 17:07:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:29.447 Running I/O for 2 seconds... 
00:18:29.447 [2024-07-15 17:07:19.648301] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe4020) 00:18:29.447 [2024-07-15 17:07:19.648378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11929 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.447 [2024-07-15 17:07:19.648397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.447 [2024-07-15 17:07:19.666063] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe4020) 00:18:29.447 [2024-07-15 17:07:19.666107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.447 [2024-07-15 17:07:19.666121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.447 [2024-07-15 17:07:19.683602] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe4020) 00:18:29.447 [2024-07-15 17:07:19.683642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13440 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.447 [2024-07-15 17:07:19.683657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.447 [2024-07-15 17:07:19.701253] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe4020) 00:18:29.447 [2024-07-15 17:07:19.701295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21696 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.447 [2024-07-15 17:07:19.701309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.447 [2024-07-15 17:07:19.718695] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe4020) 00:18:29.447 [2024-07-15 17:07:19.718736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9931 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.447 [2024-07-15 17:07:19.718751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.447 [2024-07-15 17:07:19.736216] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe4020) 00:18:29.447 [2024-07-15 17:07:19.736261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13981 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.447 [2024-07-15 17:07:19.736276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.706 [2024-07-15 17:07:19.753610] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe4020) 00:18:29.706 [2024-07-15 17:07:19.753653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2448 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.706 [2024-07-15 17:07:19.753668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.706 [2024-07-15 17:07:19.771265] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe4020) 00:18:29.706 [2024-07-15 17:07:19.771308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11032 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.706 [2024-07-15 17:07:19.771323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.706 [2024-07-15 17:07:19.788605] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe4020) 00:18:29.706 [2024-07-15 17:07:19.788645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:9964 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.706 [2024-07-15 17:07:19.788659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.706 [2024-07-15 17:07:19.806134] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe4020) 00:18:29.706 [2024-07-15 17:07:19.806181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:13207 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.706 [2024-07-15 17:07:19.806196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.706 [2024-07-15 17:07:19.823638] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe4020) 00:18:29.706 [2024-07-15 17:07:19.823679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:17213 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.706 [2024-07-15 17:07:19.823694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.707 [2024-07-15 17:07:19.841036] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe4020) 00:18:29.707 [2024-07-15 17:07:19.841075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:6014 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.707 [2024-07-15 17:07:19.841090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.707 [2024-07-15 17:07:19.858526] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe4020) 00:18:29.707 [2024-07-15 17:07:19.858564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:24659 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.707 [2024-07-15 17:07:19.858578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.707 [2024-07-15 17:07:19.875952] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe4020) 00:18:29.707 [2024-07-15 17:07:19.875989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:6353 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.707 [2024-07-15 17:07:19.876004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.707 [2024-07-15 17:07:19.894093] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe4020) 00:18:29.707 [2024-07-15 17:07:19.894131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:8089 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.707 [2024-07-15 17:07:19.894145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.707 [2024-07-15 17:07:19.911402] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe4020) 00:18:29.707 [2024-07-15 17:07:19.911437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:13295 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.707 [2024-07-15 17:07:19.911451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.707 [2024-07-15 17:07:19.929018] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe4020) 00:18:29.707 [2024-07-15 17:07:19.929056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20298 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.707 [2024-07-15 17:07:19.929070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.707 [2024-07-15 17:07:19.946428] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe4020) 00:18:29.707 [2024-07-15 17:07:19.946468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20030 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.707 [2024-07-15 17:07:19.946482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.707 [2024-07-15 17:07:19.963947] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe4020) 00:18:29.707 [2024-07-15 17:07:19.963990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:17059 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.707 [2024-07-15 17:07:19.964004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.707 [2024-07-15 17:07:19.981326] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe4020) 00:18:29.707 [2024-07-15 17:07:19.981378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13275 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.707 [2024-07-15 17:07:19.981394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.707 [2024-07-15 17:07:19.998805] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe4020) 00:18:29.707 [2024-07-15 17:07:19.998846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:18701 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.707 [2024-07-15 17:07:19.998861] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.966 [2024-07-15 17:07:20.016296] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe4020) 00:18:29.966 [2024-07-15 17:07:20.016343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:3142 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.966 [2024-07-15 17:07:20.016374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.966 [2024-07-15 17:07:20.033636] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe4020) 00:18:29.966 [2024-07-15 17:07:20.033680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:9295 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.966 [2024-07-15 17:07:20.033695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.966 [2024-07-15 17:07:20.050897] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe4020) 00:18:29.966 [2024-07-15 17:07:20.050937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22045 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.966 [2024-07-15 17:07:20.050954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.966 [2024-07-15 17:07:20.068416] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe4020) 00:18:29.966 [2024-07-15 17:07:20.068453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:11715 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.966 [2024-07-15 17:07:20.068468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.966 [2024-07-15 17:07:20.085929] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe4020) 00:18:29.966 [2024-07-15 17:07:20.085966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:15806 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.966 [2024-07-15 17:07:20.085982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.966 [2024-07-15 17:07:20.103438] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe4020) 00:18:29.966 [2024-07-15 17:07:20.103475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:5757 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.966 [2024-07-15 17:07:20.103489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.966 [2024-07-15 17:07:20.120980] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe4020) 00:18:29.966 [2024-07-15 17:07:20.121017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:21250 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:18:29.966 [2024-07-15 17:07:20.121031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.966 [2024-07-15 17:07:20.138243] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe4020) 00:18:29.966 [2024-07-15 17:07:20.138283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:17028 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.966 [2024-07-15 17:07:20.138296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.966 [2024-07-15 17:07:20.155712] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe4020) 00:18:29.966 [2024-07-15 17:07:20.155749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15849 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.966 [2024-07-15 17:07:20.155763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.966 [2024-07-15 17:07:20.172989] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe4020) 00:18:29.966 [2024-07-15 17:07:20.173037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:1791 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.966 [2024-07-15 17:07:20.173051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.966 [2024-07-15 17:07:20.190247] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe4020) 00:18:29.966 [2024-07-15 17:07:20.190283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:12874 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.966 [2024-07-15 17:07:20.190297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.966 [2024-07-15 17:07:20.207424] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe4020) 00:18:29.966 [2024-07-15 17:07:20.207460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:21342 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.966 [2024-07-15 17:07:20.207474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.966 [2024-07-15 17:07:20.224643] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe4020) 00:18:29.966 [2024-07-15 17:07:20.224684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:4736 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.966 [2024-07-15 17:07:20.224699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.966 [2024-07-15 17:07:20.241891] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe4020) 00:18:29.966 [2024-07-15 17:07:20.241950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 
nsid:1 lba:1430 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.966 [2024-07-15 17:07:20.241965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.966 [2024-07-15 17:07:20.259323] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe4020) 00:18:29.966 [2024-07-15 17:07:20.259390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:9505 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.966 [2024-07-15 17:07:20.259416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:30.226 [2024-07-15 17:07:20.276813] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe4020) 00:18:30.226 [2024-07-15 17:07:20.276869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:8519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.226 [2024-07-15 17:07:20.276884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:30.226 [2024-07-15 17:07:20.294162] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe4020) 00:18:30.226 [2024-07-15 17:07:20.294212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:6568 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.226 [2024-07-15 17:07:20.294227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:30.226 [2024-07-15 17:07:20.311629] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe4020) 00:18:30.226 [2024-07-15 17:07:20.311670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:12787 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.226 [2024-07-15 17:07:20.311685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:30.226 [2024-07-15 17:07:20.328936] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe4020) 00:18:30.226 [2024-07-15 17:07:20.328983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:21541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.226 [2024-07-15 17:07:20.328999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:30.226 [2024-07-15 17:07:20.346116] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe4020) 00:18:30.226 [2024-07-15 17:07:20.346156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:5162 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.226 [2024-07-15 17:07:20.346171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:30.226 [2024-07-15 17:07:20.363323] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe4020) 00:18:30.226 [2024-07-15 17:07:20.363377] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:22515 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.226 [2024-07-15 17:07:20.363392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:30.226 [2024-07-15 17:07:20.380553] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe4020) 00:18:30.226 [2024-07-15 17:07:20.380591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:16826 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.226 [2024-07-15 17:07:20.380612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:30.226 [2024-07-15 17:07:20.397838] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe4020) 00:18:30.226 [2024-07-15 17:07:20.397878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:12643 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.226 [2024-07-15 17:07:20.397892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:30.226 [2024-07-15 17:07:20.415284] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe4020) 00:18:30.226 [2024-07-15 17:07:20.415351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:4320 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.226 [2024-07-15 17:07:20.415366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:30.226 [2024-07-15 17:07:20.432601] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe4020) 00:18:30.226 [2024-07-15 17:07:20.432636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:618 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.226 [2024-07-15 17:07:20.432650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:30.226 [2024-07-15 17:07:20.449956] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe4020) 00:18:30.226 [2024-07-15 17:07:20.449991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:21114 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.226 [2024-07-15 17:07:20.450005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:30.226 [2024-07-15 17:07:20.467453] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe4020) 00:18:30.226 [2024-07-15 17:07:20.467487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:22104 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.226 [2024-07-15 17:07:20.467511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:30.226 [2024-07-15 17:07:20.484651] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe4020) 
00:18:30.226 [2024-07-15 17:07:20.484687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:9010 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.226 [2024-07-15 17:07:20.484700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:30.226 [2024-07-15 17:07:20.502233] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe4020) 00:18:30.226 [2024-07-15 17:07:20.502273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:20610 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.226 [2024-07-15 17:07:20.502287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:30.226 [2024-07-15 17:07:20.520285] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe4020) 00:18:30.226 [2024-07-15 17:07:20.520350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:23477 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.226 [2024-07-15 17:07:20.520364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:30.485 [2024-07-15 17:07:20.538204] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe4020) 00:18:30.485 [2024-07-15 17:07:20.538239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:8691 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.485 [2024-07-15 17:07:20.538253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:30.485 [2024-07-15 17:07:20.555829] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe4020) 00:18:30.485 [2024-07-15 17:07:20.555871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:25563 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.485 [2024-07-15 17:07:20.555895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:30.485 [2024-07-15 17:07:20.573499] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe4020) 00:18:30.485 [2024-07-15 17:07:20.573542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:24095 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.485 [2024-07-15 17:07:20.573557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:30.485 [2024-07-15 17:07:20.591075] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe4020) 00:18:30.485 [2024-07-15 17:07:20.591120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:20702 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.486 [2024-07-15 17:07:20.591134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:30.486 [2024-07-15 17:07:20.608688] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe4020) 00:18:30.486 [2024-07-15 17:07:20.608737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:18065 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.486 [2024-07-15 17:07:20.608752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:30.486 [2024-07-15 17:07:20.626332] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe4020) 00:18:30.486 [2024-07-15 17:07:20.626388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:21123 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.486 [2024-07-15 17:07:20.626404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:30.486 [2024-07-15 17:07:20.643963] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe4020) 00:18:30.486 [2024-07-15 17:07:20.644009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:21913 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.486 [2024-07-15 17:07:20.644023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:30.486 [2024-07-15 17:07:20.661497] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe4020) 00:18:30.486 [2024-07-15 17:07:20.661540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:21332 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.486 [2024-07-15 17:07:20.661555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:30.486 [2024-07-15 17:07:20.678972] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe4020) 00:18:30.486 [2024-07-15 17:07:20.679013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:22527 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.486 [2024-07-15 17:07:20.679027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:30.486 [2024-07-15 17:07:20.696294] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe4020) 00:18:30.486 [2024-07-15 17:07:20.696337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:6086 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.486 [2024-07-15 17:07:20.696351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:30.486 [2024-07-15 17:07:20.713689] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe4020) 00:18:30.486 [2024-07-15 17:07:20.713726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:7603 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.486 [2024-07-15 17:07:20.713740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:18:30.486 [2024-07-15 17:07:20.730999] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe4020) 00:18:30.486 [2024-07-15 17:07:20.731035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:9323 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.486 [2024-07-15 17:07:20.731049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:30.486 [2024-07-15 17:07:20.755879] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe4020) 00:18:30.486 [2024-07-15 17:07:20.755920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:11643 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.486 [2024-07-15 17:07:20.755934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:30.486 [2024-07-15 17:07:20.773205] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe4020) 00:18:30.486 [2024-07-15 17:07:20.773243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:21289 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.486 [2024-07-15 17:07:20.773258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:30.745 [2024-07-15 17:07:20.790654] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe4020) 00:18:30.745 [2024-07-15 17:07:20.790698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:9123 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.745 [2024-07-15 17:07:20.790713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:30.745 [2024-07-15 17:07:20.808033] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe4020) 00:18:30.745 [2024-07-15 17:07:20.808078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:18275 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.745 [2024-07-15 17:07:20.808092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:30.745 [2024-07-15 17:07:20.825415] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe4020) 00:18:30.745 [2024-07-15 17:07:20.825454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:14258 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.745 [2024-07-15 17:07:20.825468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:30.745 [2024-07-15 17:07:20.842758] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe4020) 00:18:30.745 [2024-07-15 17:07:20.842797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:15657 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.745 [2024-07-15 17:07:20.842812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:30.745 [2024-07-15 17:07:20.860155] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe4020) 00:18:30.745 [2024-07-15 17:07:20.860197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:16656 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.745 [2024-07-15 17:07:20.860211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:30.745 [2024-07-15 17:07:20.877594] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe4020) 00:18:30.745 [2024-07-15 17:07:20.877639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:3950 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.745 [2024-07-15 17:07:20.877654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:30.745 [2024-07-15 17:07:20.895289] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe4020) 00:18:30.745 [2024-07-15 17:07:20.895344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:1729 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.745 [2024-07-15 17:07:20.895369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:30.745 [2024-07-15 17:07:20.913027] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe4020) 00:18:30.745 [2024-07-15 17:07:20.913095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:4399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.745 [2024-07-15 17:07:20.913111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:30.745 [2024-07-15 17:07:20.930696] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe4020) 00:18:30.745 [2024-07-15 17:07:20.930740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:24078 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.745 [2024-07-15 17:07:20.930755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:30.745 [2024-07-15 17:07:20.948412] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe4020) 00:18:30.745 [2024-07-15 17:07:20.948458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:25005 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.745 [2024-07-15 17:07:20.948472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:30.745 [2024-07-15 17:07:20.965921] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe4020) 00:18:30.745 [2024-07-15 17:07:20.965969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:18487 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.745 [2024-07-15 
17:07:20.965984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:30.745 [2024-07-15 17:07:20.983525] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe4020) 00:18:30.745 [2024-07-15 17:07:20.983573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:1940 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.745 [2024-07-15 17:07:20.983587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:30.745 [2024-07-15 17:07:21.001087] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe4020) 00:18:30.745 [2024-07-15 17:07:21.001134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:3875 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.745 [2024-07-15 17:07:21.001150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:30.745 [2024-07-15 17:07:21.018649] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe4020) 00:18:30.745 [2024-07-15 17:07:21.018698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:15154 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.745 [2024-07-15 17:07:21.018712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:30.745 [2024-07-15 17:07:21.036206] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe4020) 00:18:30.745 [2024-07-15 17:07:21.036258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:16657 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.745 [2024-07-15 17:07:21.036273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:31.004 [2024-07-15 17:07:21.054115] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe4020) 00:18:31.004 [2024-07-15 17:07:21.054170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:22434 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.004 [2024-07-15 17:07:21.054184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:31.004 [2024-07-15 17:07:21.071715] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe4020) 00:18:31.004 [2024-07-15 17:07:21.071766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:3908 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.004 [2024-07-15 17:07:21.071782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:31.004 [2024-07-15 17:07:21.089285] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe4020) 00:18:31.004 [2024-07-15 17:07:21.089335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:7934 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:18:31.004 [2024-07-15 17:07:21.089349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:31.004 [2024-07-15 17:07:21.106871] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe4020) 00:18:31.004 [2024-07-15 17:07:21.106920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:10171 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.004 [2024-07-15 17:07:21.106935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:31.004 [2024-07-15 17:07:21.124702] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe4020) 00:18:31.005 [2024-07-15 17:07:21.124760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:6545 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.005 [2024-07-15 17:07:21.124774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:31.005 [2024-07-15 17:07:21.142458] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe4020) 00:18:31.005 [2024-07-15 17:07:21.142506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:1496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.005 [2024-07-15 17:07:21.142520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:31.005 [2024-07-15 17:07:21.159996] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe4020) 00:18:31.005 [2024-07-15 17:07:21.160047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:13315 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.005 [2024-07-15 17:07:21.160062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:31.005 [2024-07-15 17:07:21.177591] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe4020) 00:18:31.005 [2024-07-15 17:07:21.177639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:533 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.005 [2024-07-15 17:07:21.177653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:31.005 [2024-07-15 17:07:21.195548] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe4020) 00:18:31.005 [2024-07-15 17:07:21.195606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:7308 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.005 [2024-07-15 17:07:21.195622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:31.005 [2024-07-15 17:07:21.213195] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe4020) 00:18:31.005 [2024-07-15 17:07:21.213252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:74 nsid:1 lba:13491 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.005 [2024-07-15 17:07:21.213267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:31.005 [2024-07-15 17:07:21.230839] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe4020) 00:18:31.005 [2024-07-15 17:07:21.230888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:16162 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.005 [2024-07-15 17:07:21.230905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:31.005 [2024-07-15 17:07:21.248478] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe4020) 00:18:31.005 [2024-07-15 17:07:21.248526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:18283 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.005 [2024-07-15 17:07:21.248541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:31.005 [2024-07-15 17:07:21.266028] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe4020) 00:18:31.005 [2024-07-15 17:07:21.266076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:21677 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.005 [2024-07-15 17:07:21.266092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:31.005 [2024-07-15 17:07:21.283625] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe4020) 00:18:31.005 [2024-07-15 17:07:21.283678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:17947 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.005 [2024-07-15 17:07:21.283693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:31.005 [2024-07-15 17:07:21.301352] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe4020) 00:18:31.005 [2024-07-15 17:07:21.301410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:20783 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.005 [2024-07-15 17:07:21.301425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:31.264 [2024-07-15 17:07:21.318819] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe4020) 00:18:31.264 [2024-07-15 17:07:21.318865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:5754 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.264 [2024-07-15 17:07:21.318879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:31.264 [2024-07-15 17:07:21.336521] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe4020) 00:18:31.264 [2024-07-15 17:07:21.336573] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:20732 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.264 [2024-07-15 17:07:21.336589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:31.264 [2024-07-15 17:07:21.354219] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe4020) 00:18:31.264 [2024-07-15 17:07:21.354267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:14021 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.264 [2024-07-15 17:07:21.354282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:31.264 [2024-07-15 17:07:21.371903] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe4020) 00:18:31.264 [2024-07-15 17:07:21.371961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:20712 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.264 [2024-07-15 17:07:21.371975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:31.264 [2024-07-15 17:07:21.389614] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe4020) 00:18:31.264 [2024-07-15 17:07:21.389666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:25457 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.264 [2024-07-15 17:07:21.389688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:31.264 [2024-07-15 17:07:21.407255] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe4020) 00:18:31.264 [2024-07-15 17:07:21.407303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:5490 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.264 [2024-07-15 17:07:21.407317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:31.264 [2024-07-15 17:07:21.424769] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe4020) 00:18:31.264 [2024-07-15 17:07:21.424837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:20440 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.264 [2024-07-15 17:07:21.424853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:31.264 [2024-07-15 17:07:21.442409] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe4020) 00:18:31.264 [2024-07-15 17:07:21.442456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:1279 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.264 [2024-07-15 17:07:21.442471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:31.264 [2024-07-15 17:07:21.460021] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1fe4020) 00:18:31.264 [2024-07-15 17:07:21.460065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:21996 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.264 [2024-07-15 17:07:21.460080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:31.264 [2024-07-15 17:07:21.477447] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe4020) 00:18:31.264 [2024-07-15 17:07:21.477483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:7420 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.264 [2024-07-15 17:07:21.477497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:31.264 [2024-07-15 17:07:21.494715] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe4020) 00:18:31.264 [2024-07-15 17:07:21.494750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:9447 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.264 [2024-07-15 17:07:21.494764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:31.264 [2024-07-15 17:07:21.511968] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe4020) 00:18:31.264 [2024-07-15 17:07:21.512009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:7703 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.264 [2024-07-15 17:07:21.512023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:31.264 [2024-07-15 17:07:21.529250] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe4020) 00:18:31.264 [2024-07-15 17:07:21.529320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13050 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.264 [2024-07-15 17:07:21.529336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:31.265 [2024-07-15 17:07:21.546616] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe4020) 00:18:31.265 [2024-07-15 17:07:21.546652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:22383 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.265 [2024-07-15 17:07:21.546665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:31.523 [2024-07-15 17:07:21.564107] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe4020) 00:18:31.523 [2024-07-15 17:07:21.564142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:3579 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.523 [2024-07-15 17:07:21.564156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:31.523 [2024-07-15 17:07:21.581575] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe4020) 00:18:31.523 [2024-07-15 17:07:21.581610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:1216 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.523 [2024-07-15 17:07:21.581624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:31.523 [2024-07-15 17:07:21.598855] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe4020) 00:18:31.523 [2024-07-15 17:07:21.598903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:2840 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.523 [2024-07-15 17:07:21.598916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:31.523 [2024-07-15 17:07:21.615855] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe4020) 00:18:31.523 [2024-07-15 17:07:21.615905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:2967 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.523 [2024-07-15 17:07:21.615919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:31.523 00:18:31.523 Latency(us) 00:18:31.523 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:31.523 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:18:31.523 nvme0n1 : 2.01 14439.73 56.41 0.00 0.00 8858.18 7923.90 33602.09 00:18:31.523 =================================================================================================================== 00:18:31.523 Total : 14439.73 56.41 0.00 0.00 8858.18 7923.90 33602.09 00:18:31.523 0 00:18:31.523 17:07:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:18:31.523 17:07:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:18:31.523 17:07:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:18:31.523 17:07:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:18:31.523 | .driver_specific 00:18:31.523 | .nvme_error 00:18:31.523 | .status_code 00:18:31.523 | .command_transient_transport_error' 00:18:31.782 17:07:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 113 > 0 )) 00:18:31.782 17:07:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80403 00:18:31.782 17:07:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 80403 ']' 00:18:31.782 17:07:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 80403 00:18:31.782 17:07:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:18:31.782 17:07:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:31.782 17:07:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80403 00:18:31.782 killing process with pid 80403 00:18:31.782 Received shutdown signal, test time 
was about 2.000000 seconds 00:18:31.782 00:18:31.782 Latency(us) 00:18:31.782 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:31.782 =================================================================================================================== 00:18:31.782 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:31.782 17:07:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:31.782 17:07:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:31.782 17:07:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80403' 00:18:31.782 17:07:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 80403 00:18:31.782 17:07:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 80403 00:18:32.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:32.039 17:07:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:18:32.039 17:07:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:18:32.039 17:07:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:18:32.039 17:07:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:18:32.039 17:07:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:18:32.039 17:07:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80469 00:18:32.039 17:07:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:18:32.039 17:07:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80469 /var/tmp/bperf.sock 00:18:32.039 17:07:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 80469 ']' 00:18:32.039 17:07:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:32.039 17:07:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:32.039 17:07:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:32.039 17:07:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:32.039 17:07:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:32.039 [2024-07-15 17:07:22.275747] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:18:32.039 [2024-07-15 17:07:22.275840] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80469 ] 00:18:32.039 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:32.039 Zero copy mechanism will not be used. 
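The trace above (run_bperf_err randread 131072 16) restarts bdevperf as a standalone host application: core mask 0x2, 128 KiB random reads at queue depth 16 for 2 seconds, started idle (-z) so it can be configured over its RPC socket before the test is kicked off. A minimal sketch of that launch, reusing the paths and flags visible in this log (the socket-polling loop is an illustrative stand-in for the harness's waitforlisten helper, not the exact autotest code):

SPDK=/home/vagrant/spdk_repo/spdk
BPERF_SOCK=/var/tmp/bperf.sock

# Launch bdevperf on core 1 (mask 0x2): 131072-byte random reads, queue depth 16,
# 2-second runtime, idle until told to run over RPC (-z).
"$SPDK/build/examples/bdevperf" -m 2 -r "$BPERF_SOCK" -w randread -o 131072 -t 2 -q 16 -z &
bperfpid=$!

# Wait until the UNIX domain socket accepts RPCs before configuring the bdev;
# polling rpc_get_methods is one simple stand-in for waitforlisten.
until "$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
done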
00:18:32.296 [2024-07-15 17:07:22.417459] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:32.296 [2024-07-15 17:07:22.543468] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:32.554 [2024-07-15 17:07:22.599899] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:33.120 17:07:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:33.120 17:07:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:18:33.120 17:07:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:33.120 17:07:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:33.382 17:07:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:18:33.382 17:07:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.382 17:07:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:33.382 17:07:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.382 17:07:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:33.382 17:07:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:33.640 nvme0n1 00:18:33.640 17:07:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:18:33.640 17:07:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.640 17:07:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:33.640 17:07:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.640 17:07:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:18:33.640 17:07:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:33.899 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:33.899 Zero copy mechanism will not be used. 00:18:33.899 Running I/O for 2 seconds... 
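The xtrace just above captures the whole error scenario for this pass: NVMe error statistics and unlimited bdev retries are enabled, the controller is attached over TCP with data digest checking (--ddgst), the accel crc32c path is told to return corrupted digests for 32 operations, and perform_tests drives the I/O. The stream of "data digest error" / "COMMAND TRANSIENT TRANSPORT ERROR (00/22)" entries that follows is the expected result, and the count read back via bdev_get_iostat (the (( 113 > 0 )) check seen after the first pass) is how the test asserts it. A hedged sketch of the same RPC sequence, copied from the commands shown in this log; it assumes rpc_cmd in the harness talks to the target application's default RPC socket, and it generalizes the literal error count into a variable:

SPDK=/home/vagrant/spdk_repo/spdk
BPERF_SOCK=/var/tmp/bperf.sock

# Host (bdevperf) side: keep per-controller NVMe error counters, retry failed I/O
# indefinitely, and attach the target with TCP data digest enabled.
"$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
"$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Target side (default RPC socket): corrupt the crc32c results for 32 operations,
# so the host sees data digest errors on read completions.
"$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 32

# Run the configured workload, then read back how many completions ended in
# COMMAND TRANSIENT TRANSPORT ERROR (00/22); the test expects a non-zero count.
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests
errcount=$("$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
(( errcount > 0 ))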
00:18:33.899 [2024-07-15 17:07:24.057682] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:33.899 [2024-07-15 17:07:24.057762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.899 [2024-07-15 17:07:24.057794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.899 [2024-07-15 17:07:24.062240] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:33.899 [2024-07-15 17:07:24.062294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.899 [2024-07-15 17:07:24.062307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:33.899 [2024-07-15 17:07:24.066638] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:33.899 [2024-07-15 17:07:24.066675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.899 [2024-07-15 17:07:24.066689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:33.899 [2024-07-15 17:07:24.071249] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:33.899 [2024-07-15 17:07:24.071286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.899 [2024-07-15 17:07:24.071299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:33.899 [2024-07-15 17:07:24.075861] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:33.899 [2024-07-15 17:07:24.075898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.899 [2024-07-15 17:07:24.075911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.899 [2024-07-15 17:07:24.080280] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:33.899 [2024-07-15 17:07:24.080330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.899 [2024-07-15 17:07:24.080343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:33.899 [2024-07-15 17:07:24.084677] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:33.899 [2024-07-15 17:07:24.084729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.899 [2024-07-15 17:07:24.084757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:18:33.899 [2024-07-15 17:07:24.089163] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0)
00:18:33.899 [2024-07-15 17:07:24.089214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:33.899 [2024-07-15 17:07:24.089227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:18:33.899 [2024-07-15 17:07:24.093728] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0)
00:18:33.900 [2024-07-15 17:07:24.093780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:33.900 [2024-07-15 17:07:24.093807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... the same three-record sequence -- a data digest error reported by nvme_tcp_accel_seq_recv_compute_crc32_done on tqpair 0x10eaac0, the offending READ command (sqid:1 cid:15 nsid:1, len:32, LBA varies), and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion -- repeats continuously from elapsed time 00:18:33.899 through 00:18:34.683 (2024-07-15 17:07:24.089 to 17:07:24.734); the individual records differ only in timestamp, lba, and sqhd ...]
00:18:34.683 [2024-07-15 17:07:24.734105] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0)
00:18:34.683 [2024-07-15 17:07:24.734155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.683 [2024-07-15 17:07:24.734169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.683 [2024-07-15 17:07:24.738417] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:34.683 [2024-07-15 17:07:24.738462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.683 [2024-07-15 17:07:24.738475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:34.683 [2024-07-15 17:07:24.742798] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:34.683 [2024-07-15 17:07:24.742847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.683 [2024-07-15 17:07:24.742860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:34.683 [2024-07-15 17:07:24.747414] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:34.683 [2024-07-15 17:07:24.747467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.683 [2024-07-15 17:07:24.747480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:34.683 [2024-07-15 17:07:24.751926] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:34.683 [2024-07-15 17:07:24.751961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.683 [2024-07-15 17:07:24.751974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.683 [2024-07-15 17:07:24.756280] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:34.683 [2024-07-15 17:07:24.756315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.683 [2024-07-15 17:07:24.756328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:34.683 [2024-07-15 17:07:24.760645] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:34.683 [2024-07-15 17:07:24.760680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.684 [2024-07-15 17:07:24.760693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:34.684 [2024-07-15 17:07:24.765010] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:34.684 [2024-07-15 17:07:24.765045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.684 [2024-07-15 17:07:24.765058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:34.684 [2024-07-15 17:07:24.769559] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:34.684 [2024-07-15 17:07:24.769594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.684 [2024-07-15 17:07:24.769607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.684 [2024-07-15 17:07:24.773909] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:34.684 [2024-07-15 17:07:24.773944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.684 [2024-07-15 17:07:24.773957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:34.684 [2024-07-15 17:07:24.778348] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:34.684 [2024-07-15 17:07:24.778394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.684 [2024-07-15 17:07:24.778407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:34.684 [2024-07-15 17:07:24.782760] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:34.684 [2024-07-15 17:07:24.782795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.684 [2024-07-15 17:07:24.782808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:34.684 [2024-07-15 17:07:24.787271] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:34.684 [2024-07-15 17:07:24.787306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.684 [2024-07-15 17:07:24.787320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.684 [2024-07-15 17:07:24.791720] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:34.684 [2024-07-15 17:07:24.791755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.684 [2024-07-15 17:07:24.791768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:18:34.684 [2024-07-15 17:07:24.796046] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:34.684 [2024-07-15 17:07:24.796081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.684 [2024-07-15 17:07:24.796094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:34.684 [2024-07-15 17:07:24.800483] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:34.684 [2024-07-15 17:07:24.800517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.684 [2024-07-15 17:07:24.800529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:34.684 [2024-07-15 17:07:24.804993] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:34.684 [2024-07-15 17:07:24.805028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.684 [2024-07-15 17:07:24.805041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.684 [2024-07-15 17:07:24.809519] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:34.684 [2024-07-15 17:07:24.809553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.684 [2024-07-15 17:07:24.809566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:34.684 [2024-07-15 17:07:24.813973] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:34.684 [2024-07-15 17:07:24.814009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.684 [2024-07-15 17:07:24.814021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:34.684 [2024-07-15 17:07:24.818280] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:34.684 [2024-07-15 17:07:24.818315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.684 [2024-07-15 17:07:24.818328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:34.684 [2024-07-15 17:07:24.822645] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:34.684 [2024-07-15 17:07:24.822688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.684 [2024-07-15 17:07:24.822701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.684 [2024-07-15 17:07:24.827159] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:34.684 [2024-07-15 17:07:24.827194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.684 [2024-07-15 17:07:24.827207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:34.684 [2024-07-15 17:07:24.831798] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:34.684 [2024-07-15 17:07:24.831833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.684 [2024-07-15 17:07:24.831846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:34.684 [2024-07-15 17:07:24.836270] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:34.684 [2024-07-15 17:07:24.836305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.684 [2024-07-15 17:07:24.836318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:34.684 [2024-07-15 17:07:24.840670] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:34.684 [2024-07-15 17:07:24.840704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.684 [2024-07-15 17:07:24.840717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.684 [2024-07-15 17:07:24.845063] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:34.684 [2024-07-15 17:07:24.845117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.684 [2024-07-15 17:07:24.845130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:34.684 [2024-07-15 17:07:24.849606] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:34.684 [2024-07-15 17:07:24.849641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.684 [2024-07-15 17:07:24.849653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:34.684 [2024-07-15 17:07:24.854078] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:34.684 [2024-07-15 17:07:24.854113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.684 [2024-07-15 17:07:24.854126] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:34.684 [2024-07-15 17:07:24.858510] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:34.684 [2024-07-15 17:07:24.858544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.684 [2024-07-15 17:07:24.858557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.684 [2024-07-15 17:07:24.862969] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:34.684 [2024-07-15 17:07:24.863019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.684 [2024-07-15 17:07:24.863032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:34.684 [2024-07-15 17:07:24.867470] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:34.684 [2024-07-15 17:07:24.867527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.684 [2024-07-15 17:07:24.867541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:34.684 [2024-07-15 17:07:24.871848] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:34.684 [2024-07-15 17:07:24.871883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.684 [2024-07-15 17:07:24.871896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:34.684 [2024-07-15 17:07:24.876285] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:34.684 [2024-07-15 17:07:24.876334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.684 [2024-07-15 17:07:24.876347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.684 [2024-07-15 17:07:24.880687] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:34.684 [2024-07-15 17:07:24.880723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.684 [2024-07-15 17:07:24.880736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:34.684 [2024-07-15 17:07:24.885240] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:34.684 [2024-07-15 17:07:24.885291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:18:34.684 [2024-07-15 17:07:24.885305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:34.684 [2024-07-15 17:07:24.889762] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:34.684 [2024-07-15 17:07:24.889813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.685 [2024-07-15 17:07:24.889826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:34.685 [2024-07-15 17:07:24.894180] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:34.685 [2024-07-15 17:07:24.894230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.685 [2024-07-15 17:07:24.894242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.685 [2024-07-15 17:07:24.898727] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:34.685 [2024-07-15 17:07:24.898762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.685 [2024-07-15 17:07:24.898775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:34.685 [2024-07-15 17:07:24.903301] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:34.685 [2024-07-15 17:07:24.903351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.685 [2024-07-15 17:07:24.903365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:34.685 [2024-07-15 17:07:24.907707] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:34.685 [2024-07-15 17:07:24.907742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.685 [2024-07-15 17:07:24.907755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:34.685 [2024-07-15 17:07:24.912227] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:34.685 [2024-07-15 17:07:24.912278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.685 [2024-07-15 17:07:24.912291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.685 [2024-07-15 17:07:24.916759] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:34.685 [2024-07-15 17:07:24.916824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.685 [2024-07-15 17:07:24.916837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:34.685 [2024-07-15 17:07:24.921247] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:34.685 [2024-07-15 17:07:24.921298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.685 [2024-07-15 17:07:24.921310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:34.685 [2024-07-15 17:07:24.925662] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:34.685 [2024-07-15 17:07:24.925711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.685 [2024-07-15 17:07:24.925724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:34.685 [2024-07-15 17:07:24.930130] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:34.685 [2024-07-15 17:07:24.930181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.685 [2024-07-15 17:07:24.930194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.685 [2024-07-15 17:07:24.934591] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:34.685 [2024-07-15 17:07:24.934626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.685 [2024-07-15 17:07:24.934639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:34.685 [2024-07-15 17:07:24.939054] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:34.685 [2024-07-15 17:07:24.939087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.685 [2024-07-15 17:07:24.939116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:34.685 [2024-07-15 17:07:24.943712] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:34.685 [2024-07-15 17:07:24.943747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.685 [2024-07-15 17:07:24.943760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:34.685 [2024-07-15 17:07:24.948143] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:34.685 [2024-07-15 17:07:24.948193] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.685 [2024-07-15 17:07:24.948206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.685 [2024-07-15 17:07:24.952656] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:34.685 [2024-07-15 17:07:24.952706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.685 [2024-07-15 17:07:24.952719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:34.685 [2024-07-15 17:07:24.957134] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:34.685 [2024-07-15 17:07:24.957184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.685 [2024-07-15 17:07:24.957196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:34.685 [2024-07-15 17:07:24.961513] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:34.685 [2024-07-15 17:07:24.961561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.685 [2024-07-15 17:07:24.961573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:34.685 [2024-07-15 17:07:24.965891] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:34.685 [2024-07-15 17:07:24.965941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.685 [2024-07-15 17:07:24.965955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.685 [2024-07-15 17:07:24.970236] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:34.685 [2024-07-15 17:07:24.970286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.685 [2024-07-15 17:07:24.970298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:34.685 [2024-07-15 17:07:24.974653] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:34.685 [2024-07-15 17:07:24.974703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.685 [2024-07-15 17:07:24.974716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:34.685 [2024-07-15 17:07:24.979250] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x10eaac0) 00:18:34.685 [2024-07-15 17:07:24.979301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.685 [2024-07-15 17:07:24.979329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:34.946 [2024-07-15 17:07:24.983674] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:34.946 [2024-07-15 17:07:24.983710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.946 [2024-07-15 17:07:24.983723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.946 [2024-07-15 17:07:24.988256] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:34.946 [2024-07-15 17:07:24.988305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.946 [2024-07-15 17:07:24.988318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:34.946 [2024-07-15 17:07:24.992713] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:34.946 [2024-07-15 17:07:24.992763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.946 [2024-07-15 17:07:24.992790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:34.946 [2024-07-15 17:07:24.997207] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:34.946 [2024-07-15 17:07:24.997257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.946 [2024-07-15 17:07:24.997269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:34.946 [2024-07-15 17:07:25.001757] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:34.946 [2024-07-15 17:07:25.001792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.946 [2024-07-15 17:07:25.001805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.946 [2024-07-15 17:07:25.006326] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:34.946 [2024-07-15 17:07:25.006390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.946 [2024-07-15 17:07:25.006405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:34.946 [2024-07-15 17:07:25.010826] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:34.946 [2024-07-15 17:07:25.010861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.946 [2024-07-15 17:07:25.010874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:34.946 [2024-07-15 17:07:25.015172] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:34.946 [2024-07-15 17:07:25.015204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.946 [2024-07-15 17:07:25.015216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:34.946 [2024-07-15 17:07:25.019595] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:34.946 [2024-07-15 17:07:25.019629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.946 [2024-07-15 17:07:25.019642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.946 [2024-07-15 17:07:25.024159] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:34.946 [2024-07-15 17:07:25.024209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.946 [2024-07-15 17:07:25.024221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:34.946 [2024-07-15 17:07:25.028720] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:34.946 [2024-07-15 17:07:25.028754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.946 [2024-07-15 17:07:25.028768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:34.946 [2024-07-15 17:07:25.033327] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:34.946 [2024-07-15 17:07:25.033375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.946 [2024-07-15 17:07:25.033388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:34.946 [2024-07-15 17:07:25.037736] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:34.946 [2024-07-15 17:07:25.037801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.946 [2024-07-15 17:07:25.037814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:18:34.946 [2024-07-15 17:07:25.042260] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:34.946 [2024-07-15 17:07:25.042295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.946 [2024-07-15 17:07:25.042308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:34.946 [2024-07-15 17:07:25.046666] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:34.946 [2024-07-15 17:07:25.046700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.946 [2024-07-15 17:07:25.046712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:34.946 [2024-07-15 17:07:25.051134] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:34.946 [2024-07-15 17:07:25.051185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.946 [2024-07-15 17:07:25.051198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:34.946 [2024-07-15 17:07:25.055422] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:34.946 [2024-07-15 17:07:25.055457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.946 [2024-07-15 17:07:25.055469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.946 [2024-07-15 17:07:25.059920] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:34.946 [2024-07-15 17:07:25.059956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.946 [2024-07-15 17:07:25.059968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:34.946 [2024-07-15 17:07:25.064352] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:34.946 [2024-07-15 17:07:25.064412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.946 [2024-07-15 17:07:25.064425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:34.946 [2024-07-15 17:07:25.068889] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:34.946 [2024-07-15 17:07:25.068939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.947 [2024-07-15 17:07:25.068952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:34.947 [2024-07-15 17:07:25.073454] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:34.947 [2024-07-15 17:07:25.073503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.947 [2024-07-15 17:07:25.073516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.947 [2024-07-15 17:07:25.077901] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:34.947 [2024-07-15 17:07:25.077951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.947 [2024-07-15 17:07:25.077963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:34.947 [2024-07-15 17:07:25.082277] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:34.947 [2024-07-15 17:07:25.082313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.947 [2024-07-15 17:07:25.082325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:34.947 [2024-07-15 17:07:25.086704] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:34.947 [2024-07-15 17:07:25.086739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.947 [2024-07-15 17:07:25.086751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:34.947 [2024-07-15 17:07:25.091165] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:34.947 [2024-07-15 17:07:25.091200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.947 [2024-07-15 17:07:25.091213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.947 [2024-07-15 17:07:25.095651] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:34.947 [2024-07-15 17:07:25.095686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.947 [2024-07-15 17:07:25.095698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:34.947 [2024-07-15 17:07:25.099997] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:34.947 [2024-07-15 17:07:25.100046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.947 [2024-07-15 17:07:25.100058] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:34.947 [2024-07-15 17:07:25.104417] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:34.947 [2024-07-15 17:07:25.104476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.947 [2024-07-15 17:07:25.104489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:34.947 [2024-07-15 17:07:25.109094] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:34.947 [2024-07-15 17:07:25.109162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.947 [2024-07-15 17:07:25.109175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.947 [2024-07-15 17:07:25.113831] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:34.947 [2024-07-15 17:07:25.113881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.947 [2024-07-15 17:07:25.113894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:34.947 [2024-07-15 17:07:25.118388] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:34.947 [2024-07-15 17:07:25.118434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.947 [2024-07-15 17:07:25.118447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:34.947 [2024-07-15 17:07:25.123092] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:34.947 [2024-07-15 17:07:25.123142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.947 [2024-07-15 17:07:25.123155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:34.947 [2024-07-15 17:07:25.127604] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:34.947 [2024-07-15 17:07:25.127638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.947 [2024-07-15 17:07:25.127650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.947 [2024-07-15 17:07:25.132340] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:34.947 [2024-07-15 17:07:25.132399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:18:34.947 [2024-07-15 17:07:25.132413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:34.947 [2024-07-15 17:07:25.136980] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:34.947 [2024-07-15 17:07:25.137029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.947 [2024-07-15 17:07:25.137042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:34.947 [2024-07-15 17:07:25.141582] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:34.947 [2024-07-15 17:07:25.141616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.947 [2024-07-15 17:07:25.141628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:34.947 [2024-07-15 17:07:25.146204] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:34.947 [2024-07-15 17:07:25.146253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.947 [2024-07-15 17:07:25.146282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.947 [2024-07-15 17:07:25.150872] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:34.947 [2024-07-15 17:07:25.150907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.947 [2024-07-15 17:07:25.150920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:34.948 [2024-07-15 17:07:25.155441] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:34.948 [2024-07-15 17:07:25.155474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.948 [2024-07-15 17:07:25.155487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:34.948 [2024-07-15 17:07:25.159941] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:34.948 [2024-07-15 17:07:25.159977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.948 [2024-07-15 17:07:25.159989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:34.948 [2024-07-15 17:07:25.164739] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:34.948 [2024-07-15 17:07:25.164773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.948 [2024-07-15 17:07:25.164786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.948 [2024-07-15 17:07:25.169394] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:34.948 [2024-07-15 17:07:25.169459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.948 [2024-07-15 17:07:25.169472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:34.948 [2024-07-15 17:07:25.173952] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:34.948 [2024-07-15 17:07:25.173989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.948 [2024-07-15 17:07:25.174002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:34.948 [2024-07-15 17:07:25.178311] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:34.948 [2024-07-15 17:07:25.178346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.948 [2024-07-15 17:07:25.178372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:34.948 [2024-07-15 17:07:25.182767] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:34.948 [2024-07-15 17:07:25.182801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.948 [2024-07-15 17:07:25.182814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.948 [2024-07-15 17:07:25.187176] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:34.948 [2024-07-15 17:07:25.187226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.948 [2024-07-15 17:07:25.187239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:34.948 [2024-07-15 17:07:25.191939] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:34.948 [2024-07-15 17:07:25.191973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.948 [2024-07-15 17:07:25.191986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:34.948 [2024-07-15 17:07:25.196352] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:34.948 [2024-07-15 17:07:25.196399] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.948 [2024-07-15 17:07:25.196412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:34.948 [2024-07-15 17:07:25.200844] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:34.948 [2024-07-15 17:07:25.200886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.948 [2024-07-15 17:07:25.200899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.948 [2024-07-15 17:07:25.205476] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:34.948 [2024-07-15 17:07:25.205510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.948 [2024-07-15 17:07:25.205523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:34.948 [2024-07-15 17:07:25.210029] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:34.948 [2024-07-15 17:07:25.210064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.948 [2024-07-15 17:07:25.210078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:34.948 [2024-07-15 17:07:25.214456] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:34.948 [2024-07-15 17:07:25.214505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.948 [2024-07-15 17:07:25.214518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:34.948 [2024-07-15 17:07:25.218959] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:34.948 [2024-07-15 17:07:25.218994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.948 [2024-07-15 17:07:25.219006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.948 [2024-07-15 17:07:25.223648] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:34.948 [2024-07-15 17:07:25.223683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.948 [2024-07-15 17:07:25.223695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:34.948 [2024-07-15 17:07:25.228177] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 
00:18:34.948 [2024-07-15 17:07:25.228212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.948 [2024-07-15 17:07:25.228224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:34.948 [2024-07-15 17:07:25.232668] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:34.948 [2024-07-15 17:07:25.232703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.948 [2024-07-15 17:07:25.232716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:34.948 [2024-07-15 17:07:25.237190] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:34.948 [2024-07-15 17:07:25.237225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.948 [2024-07-15 17:07:25.237238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.949 [2024-07-15 17:07:25.241660] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:34.949 [2024-07-15 17:07:25.241710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.949 [2024-07-15 17:07:25.241723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:35.208 [2024-07-15 17:07:25.246198] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.208 [2024-07-15 17:07:25.246248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.208 [2024-07-15 17:07:25.246261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:35.208 [2024-07-15 17:07:25.250573] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.208 [2024-07-15 17:07:25.250608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.208 [2024-07-15 17:07:25.250621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:35.208 [2024-07-15 17:07:25.255026] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.208 [2024-07-15 17:07:25.255061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.208 [2024-07-15 17:07:25.255073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.208 [2024-07-15 17:07:25.259661] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.208 [2024-07-15 17:07:25.259697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.208 [2024-07-15 17:07:25.259709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:35.208 [2024-07-15 17:07:25.264273] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.208 [2024-07-15 17:07:25.264310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.208 [2024-07-15 17:07:25.264322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:35.208 [2024-07-15 17:07:25.268739] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.208 [2024-07-15 17:07:25.268789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.208 [2024-07-15 17:07:25.268802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:35.208 [2024-07-15 17:07:25.273292] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.208 [2024-07-15 17:07:25.273327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.208 [2024-07-15 17:07:25.273340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.208 [2024-07-15 17:07:25.277928] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.208 [2024-07-15 17:07:25.277963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.208 [2024-07-15 17:07:25.277976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:35.208 [2024-07-15 17:07:25.282386] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.208 [2024-07-15 17:07:25.282435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.208 [2024-07-15 17:07:25.282448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:35.208 [2024-07-15 17:07:25.286913] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.208 [2024-07-15 17:07:25.286964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.208 [2024-07-15 17:07:25.286977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:18:35.208 [2024-07-15 17:07:25.291450] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.208 [2024-07-15 17:07:25.291485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.208 [2024-07-15 17:07:25.291498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.208 [2024-07-15 17:07:25.296032] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.208 [2024-07-15 17:07:25.296067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.208 [2024-07-15 17:07:25.296080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:35.208 [2024-07-15 17:07:25.300589] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.208 [2024-07-15 17:07:25.300624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.208 [2024-07-15 17:07:25.300636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:35.208 [2024-07-15 17:07:25.305015] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.208 [2024-07-15 17:07:25.305050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.209 [2024-07-15 17:07:25.305063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:35.209 [2024-07-15 17:07:25.309516] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.209 [2024-07-15 17:07:25.309550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.209 [2024-07-15 17:07:25.309562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.209 [2024-07-15 17:07:25.314113] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.209 [2024-07-15 17:07:25.314148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.209 [2024-07-15 17:07:25.314161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:35.209 [2024-07-15 17:07:25.318619] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.209 [2024-07-15 17:07:25.318669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.209 [2024-07-15 17:07:25.318682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:35.209 [2024-07-15 17:07:25.323181] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.209 [2024-07-15 17:07:25.323216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.209 [2024-07-15 17:07:25.323229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:35.209 [2024-07-15 17:07:25.327666] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.209 [2024-07-15 17:07:25.327701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.209 [2024-07-15 17:07:25.327713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.209 [2024-07-15 17:07:25.332129] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.209 [2024-07-15 17:07:25.332164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.209 [2024-07-15 17:07:25.332177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:35.209 [2024-07-15 17:07:25.336676] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.209 [2024-07-15 17:07:25.336711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.209 [2024-07-15 17:07:25.336723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:35.209 [2024-07-15 17:07:25.341157] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.209 [2024-07-15 17:07:25.341192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.209 [2024-07-15 17:07:25.341205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:35.209 [2024-07-15 17:07:25.345672] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.209 [2024-07-15 17:07:25.345709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.209 [2024-07-15 17:07:25.345722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.209 [2024-07-15 17:07:25.350079] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.209 [2024-07-15 17:07:25.350115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.209 [2024-07-15 17:07:25.350128] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:35.209 [2024-07-15 17:07:25.354583] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.209 [2024-07-15 17:07:25.354618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.209 [2024-07-15 17:07:25.354631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:35.209 [2024-07-15 17:07:25.359065] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.209 [2024-07-15 17:07:25.359100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.209 [2024-07-15 17:07:25.359113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:35.209 [2024-07-15 17:07:25.363711] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.209 [2024-07-15 17:07:25.363746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.209 [2024-07-15 17:07:25.363759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.209 [2024-07-15 17:07:25.368158] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.209 [2024-07-15 17:07:25.368193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.209 [2024-07-15 17:07:25.368205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:35.209 [2024-07-15 17:07:25.372555] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.209 [2024-07-15 17:07:25.372589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.209 [2024-07-15 17:07:25.372601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:35.209 [2024-07-15 17:07:25.376905] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.209 [2024-07-15 17:07:25.376960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.209 [2024-07-15 17:07:25.376974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:35.209 [2024-07-15 17:07:25.381461] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.209 [2024-07-15 17:07:25.381495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:18:35.209 [2024-07-15 17:07:25.381508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.209 [2024-07-15 17:07:25.386001] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.209 [2024-07-15 17:07:25.386035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.209 [2024-07-15 17:07:25.386048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:35.209 [2024-07-15 17:07:25.390487] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.209 [2024-07-15 17:07:25.390523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.209 [2024-07-15 17:07:25.390536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:35.209 [2024-07-15 17:07:25.394937] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.209 [2024-07-15 17:07:25.394972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.209 [2024-07-15 17:07:25.394985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:35.209 [2024-07-15 17:07:25.399464] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.209 [2024-07-15 17:07:25.399524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.209 [2024-07-15 17:07:25.399537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.209 [2024-07-15 17:07:25.404025] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.209 [2024-07-15 17:07:25.404062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.209 [2024-07-15 17:07:25.404075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:35.209 [2024-07-15 17:07:25.408458] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.209 [2024-07-15 17:07:25.408492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.209 [2024-07-15 17:07:25.408505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:35.209 [2024-07-15 17:07:25.413023] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.209 [2024-07-15 17:07:25.413058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.209 [2024-07-15 17:07:25.413071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:35.209 [2024-07-15 17:07:25.417374] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.209 [2024-07-15 17:07:25.417407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.209 [2024-07-15 17:07:25.417420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.209 [2024-07-15 17:07:25.421775] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.209 [2024-07-15 17:07:25.421809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.209 [2024-07-15 17:07:25.421823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:35.209 [2024-07-15 17:07:25.426268] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.209 [2024-07-15 17:07:25.426310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.209 [2024-07-15 17:07:25.426323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:35.209 [2024-07-15 17:07:25.430687] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.210 [2024-07-15 17:07:25.430722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.210 [2024-07-15 17:07:25.430735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:35.210 [2024-07-15 17:07:25.435215] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.210 [2024-07-15 17:07:25.435251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.210 [2024-07-15 17:07:25.435264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.210 [2024-07-15 17:07:25.439676] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.210 [2024-07-15 17:07:25.439710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.210 [2024-07-15 17:07:25.439723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:35.210 [2024-07-15 17:07:25.444142] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.210 [2024-07-15 17:07:25.444178] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.210 [2024-07-15 17:07:25.444191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:35.210 [2024-07-15 17:07:25.448695] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.210 [2024-07-15 17:07:25.448745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.210 [2024-07-15 17:07:25.448758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:35.210 [2024-07-15 17:07:25.453156] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.210 [2024-07-15 17:07:25.453191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.210 [2024-07-15 17:07:25.453204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.210 [2024-07-15 17:07:25.457780] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.210 [2024-07-15 17:07:25.457815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.210 [2024-07-15 17:07:25.457828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:35.210 [2024-07-15 17:07:25.462026] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.210 [2024-07-15 17:07:25.462061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.210 [2024-07-15 17:07:25.462073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:35.210 [2024-07-15 17:07:25.466503] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.210 [2024-07-15 17:07:25.466538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.210 [2024-07-15 17:07:25.466551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:35.210 [2024-07-15 17:07:25.470834] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.210 [2024-07-15 17:07:25.470869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.210 [2024-07-15 17:07:25.470882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.210 [2024-07-15 17:07:25.475267] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 
00:18:35.210 [2024-07-15 17:07:25.475302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.210 [2024-07-15 17:07:25.475315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:35.210 [2024-07-15 17:07:25.479709] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.210 [2024-07-15 17:07:25.479746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.210 [2024-07-15 17:07:25.479759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:35.210 [2024-07-15 17:07:25.484131] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.210 [2024-07-15 17:07:25.484167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.210 [2024-07-15 17:07:25.484179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:35.210 [2024-07-15 17:07:25.488548] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.210 [2024-07-15 17:07:25.488582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.210 [2024-07-15 17:07:25.488594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.210 [2024-07-15 17:07:25.492958] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.210 [2024-07-15 17:07:25.492994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.210 [2024-07-15 17:07:25.493007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:35.210 [2024-07-15 17:07:25.497256] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.210 [2024-07-15 17:07:25.497293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.210 [2024-07-15 17:07:25.497305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:35.210 [2024-07-15 17:07:25.501579] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.210 [2024-07-15 17:07:25.501611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.210 [2024-07-15 17:07:25.501624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:35.469 [2024-07-15 17:07:25.506010] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.469 [2024-07-15 17:07:25.506045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.469 [2024-07-15 17:07:25.506058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.469 [2024-07-15 17:07:25.510458] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.469 [2024-07-15 17:07:25.510493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.469 [2024-07-15 17:07:25.510506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:35.469 [2024-07-15 17:07:25.514840] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.469 [2024-07-15 17:07:25.514875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.469 [2024-07-15 17:07:25.514888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:35.469 [2024-07-15 17:07:25.519401] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.469 [2024-07-15 17:07:25.519435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.469 [2024-07-15 17:07:25.519448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:35.469 [2024-07-15 17:07:25.523909] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.469 [2024-07-15 17:07:25.523945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.469 [2024-07-15 17:07:25.523958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.469 [2024-07-15 17:07:25.528438] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.469 [2024-07-15 17:07:25.528472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.469 [2024-07-15 17:07:25.528485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:35.469 [2024-07-15 17:07:25.532803] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.469 [2024-07-15 17:07:25.532838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.469 [2024-07-15 17:07:25.532851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:18:35.469 [2024-07-15 17:07:25.537311] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.469 [2024-07-15 17:07:25.537347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.470 [2024-07-15 17:07:25.537374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:35.470 [2024-07-15 17:07:25.541684] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.470 [2024-07-15 17:07:25.541718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.470 [2024-07-15 17:07:25.541731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.470 [2024-07-15 17:07:25.546092] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.470 [2024-07-15 17:07:25.546128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.470 [2024-07-15 17:07:25.546142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:35.470 [2024-07-15 17:07:25.550524] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.470 [2024-07-15 17:07:25.550558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.470 [2024-07-15 17:07:25.550571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:35.470 [2024-07-15 17:07:25.554895] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.470 [2024-07-15 17:07:25.554930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.470 [2024-07-15 17:07:25.554942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:35.470 [2024-07-15 17:07:25.559289] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.470 [2024-07-15 17:07:25.559324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.470 [2024-07-15 17:07:25.559337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.470 [2024-07-15 17:07:25.563752] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.470 [2024-07-15 17:07:25.563789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.470 [2024-07-15 17:07:25.563802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:35.470 [2024-07-15 17:07:25.568208] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.470 [2024-07-15 17:07:25.568242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.470 [2024-07-15 17:07:25.568255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:35.470 [2024-07-15 17:07:25.572608] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.470 [2024-07-15 17:07:25.572638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.470 [2024-07-15 17:07:25.572650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:35.470 [2024-07-15 17:07:25.577080] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.470 [2024-07-15 17:07:25.577115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.470 [2024-07-15 17:07:25.577129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.470 [2024-07-15 17:07:25.581534] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.470 [2024-07-15 17:07:25.581569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.470 [2024-07-15 17:07:25.581581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:35.470 [2024-07-15 17:07:25.586073] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.470 [2024-07-15 17:07:25.586108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.470 [2024-07-15 17:07:25.586121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:35.470 [2024-07-15 17:07:25.590602] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.470 [2024-07-15 17:07:25.590645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.470 [2024-07-15 17:07:25.590664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:35.470 [2024-07-15 17:07:25.595278] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.470 [2024-07-15 17:07:25.595323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.470 [2024-07-15 17:07:25.595341] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.470 [2024-07-15 17:07:25.599854] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.470 [2024-07-15 17:07:25.599891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.470 [2024-07-15 17:07:25.599904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:35.470 [2024-07-15 17:07:25.604270] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.470 [2024-07-15 17:07:25.604306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.470 [2024-07-15 17:07:25.604319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:35.470 [2024-07-15 17:07:25.608609] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.470 [2024-07-15 17:07:25.608644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.470 [2024-07-15 17:07:25.608657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:35.470 [2024-07-15 17:07:25.613120] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.470 [2024-07-15 17:07:25.613156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.470 [2024-07-15 17:07:25.613169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.470 [2024-07-15 17:07:25.617574] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.470 [2024-07-15 17:07:25.617609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.470 [2024-07-15 17:07:25.617622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:35.470 [2024-07-15 17:07:25.621935] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.470 [2024-07-15 17:07:25.621971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.470 [2024-07-15 17:07:25.621983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:35.470 [2024-07-15 17:07:25.626438] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.470 [2024-07-15 17:07:25.626472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:18:35.470 [2024-07-15 17:07:25.626485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:35.470 [2024-07-15 17:07:25.630756] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.470 [2024-07-15 17:07:25.630797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.470 [2024-07-15 17:07:25.630810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.470 [2024-07-15 17:07:25.635243] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.470 [2024-07-15 17:07:25.635278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.470 [2024-07-15 17:07:25.635292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:35.470 [2024-07-15 17:07:25.639615] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.470 [2024-07-15 17:07:25.639649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.470 [2024-07-15 17:07:25.639662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:35.470 [2024-07-15 17:07:25.643951] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.470 [2024-07-15 17:07:25.643986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.470 [2024-07-15 17:07:25.643998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:35.470 [2024-07-15 17:07:25.648231] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.470 [2024-07-15 17:07:25.648265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.470 [2024-07-15 17:07:25.648278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.471 [2024-07-15 17:07:25.652692] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.471 [2024-07-15 17:07:25.652727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.471 [2024-07-15 17:07:25.652739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:35.471 [2024-07-15 17:07:25.657166] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.471 [2024-07-15 17:07:25.657203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.471 [2024-07-15 17:07:25.657216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:35.471 [2024-07-15 17:07:25.661675] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.471 [2024-07-15 17:07:25.661708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.471 [2024-07-15 17:07:25.661721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:35.471 [2024-07-15 17:07:25.666082] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.471 [2024-07-15 17:07:25.666117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.471 [2024-07-15 17:07:25.666130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.471 [2024-07-15 17:07:25.670525] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.471 [2024-07-15 17:07:25.670560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.471 [2024-07-15 17:07:25.670572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:35.471 [2024-07-15 17:07:25.674956] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.471 [2024-07-15 17:07:25.674991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.471 [2024-07-15 17:07:25.675004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:35.471 [2024-07-15 17:07:25.679497] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.471 [2024-07-15 17:07:25.679542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.471 [2024-07-15 17:07:25.679554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:35.471 [2024-07-15 17:07:25.683965] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.471 [2024-07-15 17:07:25.684003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.471 [2024-07-15 17:07:25.684017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.471 [2024-07-15 17:07:25.688399] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.471 [2024-07-15 17:07:25.688430] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.471 [2024-07-15 17:07:25.688463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:35.471 [2024-07-15 17:07:25.692852] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.471 [2024-07-15 17:07:25.692888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.471 [2024-07-15 17:07:25.692901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:35.471 [2024-07-15 17:07:25.697117] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.471 [2024-07-15 17:07:25.697153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.471 [2024-07-15 17:07:25.697166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:35.471 [2024-07-15 17:07:25.701571] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.471 [2024-07-15 17:07:25.701603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.471 [2024-07-15 17:07:25.701616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.471 [2024-07-15 17:07:25.706002] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.471 [2024-07-15 17:07:25.706052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.471 [2024-07-15 17:07:25.706065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:35.471 [2024-07-15 17:07:25.710490] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.471 [2024-07-15 17:07:25.710525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.471 [2024-07-15 17:07:25.710538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:35.471 [2024-07-15 17:07:25.714940] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.471 [2024-07-15 17:07:25.714976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.471 [2024-07-15 17:07:25.714989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:35.471 [2024-07-15 17:07:25.719519] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 
00:18:35.471 [2024-07-15 17:07:25.719552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.471 [2024-07-15 17:07:25.719565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.471 [2024-07-15 17:07:25.724064] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.471 [2024-07-15 17:07:25.724100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.471 [2024-07-15 17:07:25.724114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:35.471 [2024-07-15 17:07:25.728728] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.471 [2024-07-15 17:07:25.728763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.471 [2024-07-15 17:07:25.728776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:35.471 [2024-07-15 17:07:25.733325] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.471 [2024-07-15 17:07:25.733375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.471 [2024-07-15 17:07:25.733390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:35.471 [2024-07-15 17:07:25.737793] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.471 [2024-07-15 17:07:25.737828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.471 [2024-07-15 17:07:25.737840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.471 [2024-07-15 17:07:25.742329] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.471 [2024-07-15 17:07:25.742376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.471 [2024-07-15 17:07:25.742390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:35.471 [2024-07-15 17:07:25.746935] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.471 [2024-07-15 17:07:25.746971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.471 [2024-07-15 17:07:25.746984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:35.471 [2024-07-15 17:07:25.751517] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.471 [2024-07-15 17:07:25.751550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.471 [2024-07-15 17:07:25.751563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:35.471 [2024-07-15 17:07:25.756002] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.471 [2024-07-15 17:07:25.756037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.471 [2024-07-15 17:07:25.756050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.471 [2024-07-15 17:07:25.760612] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.471 [2024-07-15 17:07:25.760647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.471 [2024-07-15 17:07:25.760661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:35.471 [2024-07-15 17:07:25.765000] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.471 [2024-07-15 17:07:25.765034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.471 [2024-07-15 17:07:25.765047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:35.731 [2024-07-15 17:07:25.769554] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.731 [2024-07-15 17:07:25.769588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.731 [2024-07-15 17:07:25.769601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:35.731 [2024-07-15 17:07:25.774041] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.731 [2024-07-15 17:07:25.774076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.731 [2024-07-15 17:07:25.774089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.731 [2024-07-15 17:07:25.778606] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.731 [2024-07-15 17:07:25.778641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.731 [2024-07-15 17:07:25.778654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:18:35.731 [2024-07-15 17:07:25.783270] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.731 [2024-07-15 17:07:25.783321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.731 [2024-07-15 17:07:25.783334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:35.731 [2024-07-15 17:07:25.787886] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.731 [2024-07-15 17:07:25.787921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.731 [2024-07-15 17:07:25.787935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:35.731 [2024-07-15 17:07:25.792500] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.731 [2024-07-15 17:07:25.792552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.731 [2024-07-15 17:07:25.792566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.731 [2024-07-15 17:07:25.796989] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.731 [2024-07-15 17:07:25.797040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.731 [2024-07-15 17:07:25.797053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:35.731 [2024-07-15 17:07:25.801310] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.731 [2024-07-15 17:07:25.801349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.731 [2024-07-15 17:07:25.801377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:35.731 [2024-07-15 17:07:25.805734] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.732 [2024-07-15 17:07:25.805769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.732 [2024-07-15 17:07:25.805782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:35.732 [2024-07-15 17:07:25.810101] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.732 [2024-07-15 17:07:25.810137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.732 [2024-07-15 17:07:25.810149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.732 [2024-07-15 17:07:25.814707] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.732 [2024-07-15 17:07:25.814742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.732 [2024-07-15 17:07:25.814755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:35.732 [2024-07-15 17:07:25.819214] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.732 [2024-07-15 17:07:25.819266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.732 [2024-07-15 17:07:25.819279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:35.732 [2024-07-15 17:07:25.823742] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.732 [2024-07-15 17:07:25.823776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.732 [2024-07-15 17:07:25.823789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:35.732 [2024-07-15 17:07:25.828361] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.732 [2024-07-15 17:07:25.828406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.732 [2024-07-15 17:07:25.828419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.732 [2024-07-15 17:07:25.832887] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.732 [2024-07-15 17:07:25.832922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.732 [2024-07-15 17:07:25.832935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:35.732 [2024-07-15 17:07:25.837409] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.732 [2024-07-15 17:07:25.837443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.732 [2024-07-15 17:07:25.837455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:35.732 [2024-07-15 17:07:25.841995] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.732 [2024-07-15 17:07:25.842047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.732 [2024-07-15 17:07:25.842060] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:35.732 [2024-07-15 17:07:25.846737] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.732 [2024-07-15 17:07:25.846773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.732 [2024-07-15 17:07:25.846786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.732 [2024-07-15 17:07:25.851191] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.732 [2024-07-15 17:07:25.851226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.732 [2024-07-15 17:07:25.851239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:35.732 [2024-07-15 17:07:25.855720] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.732 [2024-07-15 17:07:25.855756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.732 [2024-07-15 17:07:25.855769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:35.732 [2024-07-15 17:07:25.860217] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.732 [2024-07-15 17:07:25.860253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.732 [2024-07-15 17:07:25.860266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:35.732 [2024-07-15 17:07:25.864655] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.732 [2024-07-15 17:07:25.864689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.732 [2024-07-15 17:07:25.864703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.732 [2024-07-15 17:07:25.869159] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.732 [2024-07-15 17:07:25.869194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.732 [2024-07-15 17:07:25.869207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:35.732 [2024-07-15 17:07:25.873631] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.732 [2024-07-15 17:07:25.873667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:35.732 [2024-07-15 17:07:25.873680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:35.732 [2024-07-15 17:07:25.878045] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.732 [2024-07-15 17:07:25.878080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.732 [2024-07-15 17:07:25.878093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:35.732 [2024-07-15 17:07:25.882648] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.732 [2024-07-15 17:07:25.882683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.732 [2024-07-15 17:07:25.882697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.732 [2024-07-15 17:07:25.887120] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.732 [2024-07-15 17:07:25.887155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.732 [2024-07-15 17:07:25.887167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:35.732 [2024-07-15 17:07:25.891691] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.732 [2024-07-15 17:07:25.891726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.732 [2024-07-15 17:07:25.891738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:35.732 [2024-07-15 17:07:25.896145] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.732 [2024-07-15 17:07:25.896179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.732 [2024-07-15 17:07:25.896192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:35.732 [2024-07-15 17:07:25.900684] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.732 [2024-07-15 17:07:25.900734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.732 [2024-07-15 17:07:25.900747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.732 [2024-07-15 17:07:25.905172] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.732 [2024-07-15 17:07:25.905207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.732 [2024-07-15 17:07:25.905219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:35.732 [2024-07-15 17:07:25.909612] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.732 [2024-07-15 17:07:25.909647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.732 [2024-07-15 17:07:25.909660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:35.732 [2024-07-15 17:07:25.914079] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.733 [2024-07-15 17:07:25.914113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.733 [2024-07-15 17:07:25.914125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:35.733 [2024-07-15 17:07:25.918534] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.733 [2024-07-15 17:07:25.918568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.733 [2024-07-15 17:07:25.918581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.733 [2024-07-15 17:07:25.922972] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.733 [2024-07-15 17:07:25.923008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.733 [2024-07-15 17:07:25.923021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:35.733 [2024-07-15 17:07:25.927624] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.733 [2024-07-15 17:07:25.927658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.733 [2024-07-15 17:07:25.927671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:35.733 [2024-07-15 17:07:25.932205] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.733 [2024-07-15 17:07:25.932240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.733 [2024-07-15 17:07:25.932253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:35.733 [2024-07-15 17:07:25.936711] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.733 [2024-07-15 17:07:25.936745] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.733 [2024-07-15 17:07:25.936759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.733 [2024-07-15 17:07:25.941203] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.733 [2024-07-15 17:07:25.941238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.733 [2024-07-15 17:07:25.941251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:35.733 [2024-07-15 17:07:25.945563] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.733 [2024-07-15 17:07:25.945597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.733 [2024-07-15 17:07:25.945609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:35.733 [2024-07-15 17:07:25.949990] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.733 [2024-07-15 17:07:25.950025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.733 [2024-07-15 17:07:25.950038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:35.733 [2024-07-15 17:07:25.954558] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.733 [2024-07-15 17:07:25.954593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.733 [2024-07-15 17:07:25.954605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.733 [2024-07-15 17:07:25.959083] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.733 [2024-07-15 17:07:25.959118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.733 [2024-07-15 17:07:25.959131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:35.733 [2024-07-15 17:07:25.963542] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.733 [2024-07-15 17:07:25.963576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.733 [2024-07-15 17:07:25.963589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:35.733 [2024-07-15 17:07:25.968022] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 
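Each burst above is the same three-line pattern: nvme_tcp.c reports a data digest error when the CRC32C it computes over the received C2H payload does not match the digest carried in the PDU, nvme_qpair.c prints the READ that was affected, and that command is completed with NVMe status 00/22, COMMAND TRANSIENT TRANSPORT ERROR, i.e. the transfer rather than the command itself is considered to have failed and may be retried. For a rough sanity check while reading a saved copy of this console output, the events can be tallied with grep; this is only an illustrative sketch (the log path is an assumption), the test itself counts them through the bdev_get_iostat RPC shown further below:

    # Hypothetical path to a saved copy of this console output.
    LOG=nvmf_digest_error.log
    # Digest mismatches detected by the host-side TCP transport.
    grep -c 'data digest error on tqpair' "$LOG"
    # Completions reported as Transient Transport Error (00/22).
    grep -c 'TRANSIENT TRANSPORT ERROR (00/22)' "$LOG"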
00:18:35.733 [2024-07-15 17:07:25.968057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.733 [2024-07-15 17:07:25.968071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:35.733 [2024-07-15 17:07:25.972387] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.733 [2024-07-15 17:07:25.972421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.733 [2024-07-15 17:07:25.972434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.733 [2024-07-15 17:07:25.976857] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.733 [2024-07-15 17:07:25.976893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.733 [2024-07-15 17:07:25.976905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:35.733 [2024-07-15 17:07:25.981290] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.733 [2024-07-15 17:07:25.981325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.733 [2024-07-15 17:07:25.981338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:35.733 [2024-07-15 17:07:25.985724] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.733 [2024-07-15 17:07:25.985759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.733 [2024-07-15 17:07:25.985772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:35.733 [2024-07-15 17:07:25.990217] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.733 [2024-07-15 17:07:25.990254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.733 [2024-07-15 17:07:25.990267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.733 [2024-07-15 17:07:25.994776] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.733 [2024-07-15 17:07:25.994812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.733 [2024-07-15 17:07:25.994824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:35.733 [2024-07-15 17:07:25.999310] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.733 [2024-07-15 17:07:25.999345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.733 [2024-07-15 17:07:25.999371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:35.733 [2024-07-15 17:07:26.003853] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.733 [2024-07-15 17:07:26.003888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.733 [2024-07-15 17:07:26.003901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:35.733 [2024-07-15 17:07:26.008262] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.733 [2024-07-15 17:07:26.008299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.733 [2024-07-15 17:07:26.008312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.733 [2024-07-15 17:07:26.012597] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.733 [2024-07-15 17:07:26.012631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.733 [2024-07-15 17:07:26.012645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:35.733 [2024-07-15 17:07:26.016994] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.733 [2024-07-15 17:07:26.017030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.733 [2024-07-15 17:07:26.017042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:35.733 [2024-07-15 17:07:26.021373] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.733 [2024-07-15 17:07:26.021404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.733 [2024-07-15 17:07:26.021417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:35.733 [2024-07-15 17:07:26.025733] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0) 00:18:35.733 [2024-07-15 17:07:26.025768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.733 [2024-07-15 17:07:26.025780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0
00:18:35.992 [2024-07-15 17:07:26.030194] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0)
00:18:35.992 [2024-07-15 17:07:26.030229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:35.992 [2024-07-15 17:07:26.030242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:18:35.992 [2024-07-15 17:07:26.035011] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0)
00:18:35.992 [2024-07-15 17:07:26.035047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:35.992 [2024-07-15 17:07:26.035060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:18:35.992 [2024-07-15 17:07:26.039475] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0)
00:18:35.992 [2024-07-15 17:07:26.039518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:35.992 [2024-07-15 17:07:26.039531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:18:35.992 [2024-07-15 17:07:26.043886] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10eaac0)
00:18:35.992 [2024-07-15 17:07:26.043921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:35.992 [2024-07-15 17:07:26.043935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:18:35.992
00:18:35.992 Latency(us)
00:18:35.992 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:35.992 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:18:35.992 nvme0n1 : 2.00 6888.58 861.07 0.00 0.00 2319.25 1995.87 9830.40
00:18:35.992 ===================================================================================================================
00:18:35.992 Total : 6888.58 861.07 0.00 0.00 2319.25 1995.87 9830.40
00:18:35.992 0
00:18:35.992 17:07:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:18:35.992 17:07:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:18:35.992 17:07:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:18:35.992 17:07:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:18:35.992 | .driver_specific
00:18:35.992 | .nvme_error
00:18:35.992 | .status_code
00:18:35.992 | .command_transient_transport_error'
00:18:36.250 17:07:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 444 > 0 ))
00:18:36.250 17:07:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80469
00:18:36.250 17:07:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z
80469 ']'
00:18:36.250 17:07:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 80469
00:18:36.250 17:07:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:18:36.250 17:07:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:18:36.250 17:07:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80469
00:18:36.250 killing process with pid 80469 Received shutdown signal, test time was about 2.000000 seconds
00:18:36.250
00:18:36.250 Latency(us)
00:18:36.250 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:36.250 ===================================================================================================================
00:18:36.250 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:18:36.250 17:07:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:18:36.250 17:07:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:18:36.250 17:07:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80469'
00:18:36.250 17:07:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 80469
00:18:36.250 17:07:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 80469
00:18:36.508 17:07:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:18:36.508 17:07:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:18:36.508 17:07:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:18:36.508 17:07:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:18:36.508 17:07:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:18:36.508 17:07:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80524
00:18:36.508 17:07:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:18:36.508 17:07:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80524 /var/tmp/bperf.sock
00:18:36.508 17:07:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 80524 ']'
00:18:36.508 17:07:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:18:36.508 17:07:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:18:36.508 17:07:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:18:36.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:18:36.508 17:07:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:18:36.508 17:07:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:18:36.508 [2024-07-15 17:07:26.637269] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization...
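The trace above closes out the randread pass and prepares the randwrite one: get_transient_errcount queries the running bdevperf over its RPC socket with bdev_get_iostat and pulls the command_transient_transport_error counter out of the NVMe error statistics with jq (444 errors here), the previous bdevperf instance (pid 80469) is killed and waited on, and a fresh instance is launched with -w randwrite -o 4096 -q 128 before the script waits for its RPC socket. A condensed sketch of that check-and-relaunch sequence, using the paths and flags shown in the trace (the shell variables and the simplified socket wait are illustrative, not the test's own helpers):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    SOCK=/var/tmp/bperf.sock

    # Read the transient-transport-error counter accumulated by the previous run.
    errs=$("$RPC" -s "$SOCK" bdev_get_iostat -b nvme0n1 |
        jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    (( errs > 0 )) || echo "expected transient transport errors, got $errs" >&2

    # Stop the randread bdevperf (killprocess in the trace does a kill followed by a wait).
    kill "$bperfpid" && wait "$bperfpid"

    # Launch a new bdevperf for the randwrite pass and wait for its RPC socket
    # (a simplified stand-in for waitforlisten).
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 2 -r "$SOCK" -w randwrite -o 4096 -t 2 -q 128 -z &
    bperfpid=$!
    while [ ! -S "$SOCK" ]; do sleep 0.1; done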
00:18:36.508 [2024-07-15 17:07:26.637376] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80524 ] 00:18:36.508 [2024-07-15 17:07:26.771241] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:36.766 [2024-07-15 17:07:26.917178] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:36.766 [2024-07-15 17:07:26.973706] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:37.700 17:07:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:37.700 17:07:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:18:37.700 17:07:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:37.700 17:07:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:37.959 17:07:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:18:37.959 17:07:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.959 17:07:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:37.959 17:07:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.959 17:07:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:37.959 17:07:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:38.218 nvme0n1 00:18:38.218 17:07:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:18:38.218 17:07:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.218 17:07:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:38.218 17:07:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.218 17:07:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:18:38.218 17:07:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:38.218 Running I/O for 2 seconds... 
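With the new bdevperf listening, the randwrite pass is wired up the same way as the read one: NVMe error statistics and bdev-level retries are enabled, crc32c error injection is switched off while the controller is attached over TCP with data digest (--ddgst) enabled, injection is then turned back on in corrupt mode with the same -o crc32c -t corrupt -i 256 arguments, and perform_tests starts the queued 2-second workload whose digest errors follow below. A condensed sketch of that RPC sequence as the trace shows it; bperf_rpc in the trace wraps rpc.py against bdevperf's socket, while rpc_cmd goes to the nvmf target application, whose socket path is an assumption here:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    BPERF_SOCK=/var/tmp/bperf.sock   # bdevperf's RPC socket (bperf_rpc in the trace)
    TGT_SOCK=/var/tmp/spdk.sock      # nvmf target's socket (assumed default; rpc_cmd in the trace)

    # Count NVMe errors per status code and let the bdev layer retry failed I/O.
    "$RPC" -s "$BPERF_SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # Keep crc32c error injection disabled while the host attaches.
    "$RPC" -s "$TGT_SOCK" accel_error_inject_error -o crc32c -t disable
    # Attach the NVMe-oF/TCP controller with data digest enabled; this exposes nvme0n1.
    "$RPC" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # Re-enable crc32c corruption with the arguments the test uses, so data digests go bad.
    "$RPC" -s "$TGT_SOCK" accel_error_inject_error -o crc32c -t corrupt -i 256
    # Start the queued randwrite workload in bdevperf.
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$BPERF_SOCK" perform_tests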
00:18:38.218 [2024-07-15 17:07:28.499552] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea360) with pdu=0x2000190fef90 00:18:38.218 [2024-07-15 17:07:28.502279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:996 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.218 [2024-07-15 17:07:28.502355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:38.477 [2024-07-15 17:07:28.516860] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea360) with pdu=0x2000190feb58 00:18:38.477 [2024-07-15 17:07:28.519701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6497 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.477 [2024-07-15 17:07:28.519744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:38.477 [2024-07-15 17:07:28.533259] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea360) with pdu=0x2000190fe2e8 00:18:38.477 [2024-07-15 17:07:28.535994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:9366 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.477 [2024-07-15 17:07:28.536062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:38.477 [2024-07-15 17:07:28.550200] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea360) with pdu=0x2000190fda78 00:18:38.477 [2024-07-15 17:07:28.552865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17692 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.477 [2024-07-15 17:07:28.552919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:38.477 [2024-07-15 17:07:28.567292] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea360) with pdu=0x2000190fd208 00:18:38.477 [2024-07-15 17:07:28.569986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:22549 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.477 [2024-07-15 17:07:28.570041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:38.477 [2024-07-15 17:07:28.583803] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea360) with pdu=0x2000190fc998 00:18:38.477 [2024-07-15 17:07:28.586537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:12904 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.477 [2024-07-15 17:07:28.586588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:38.478 [2024-07-15 17:07:28.601341] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea360) with pdu=0x2000190fc128 00:18:38.478 [2024-07-15 17:07:28.604067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:15477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.478 [2024-07-15 17:07:28.604121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0075 p:0 
m:0 dnr:0 00:18:38.478 [2024-07-15 17:07:28.618342] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea360) with pdu=0x2000190fb8b8 00:18:38.478 [2024-07-15 17:07:28.620901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12681 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.478 [2024-07-15 17:07:28.620955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:38.478 [2024-07-15 17:07:28.634336] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea360) with pdu=0x2000190fb048 00:18:38.478 [2024-07-15 17:07:28.636916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:13448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.478 [2024-07-15 17:07:28.636969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:38.478 [2024-07-15 17:07:28.650812] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea360) with pdu=0x2000190fa7d8 00:18:38.478 [2024-07-15 17:07:28.653303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:11802 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.478 [2024-07-15 17:07:28.653352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:38.478 [2024-07-15 17:07:28.667117] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea360) with pdu=0x2000190f9f68 00:18:38.478 [2024-07-15 17:07:28.669706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:15308 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.478 [2024-07-15 17:07:28.669757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:38.478 [2024-07-15 17:07:28.683778] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea360) with pdu=0x2000190f96f8 00:18:38.478 [2024-07-15 17:07:28.686208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:8823 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.478 [2024-07-15 17:07:28.686259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:38.478 [2024-07-15 17:07:28.700189] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea360) with pdu=0x2000190f8e88 00:18:38.478 [2024-07-15 17:07:28.702714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:21847 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.478 [2024-07-15 17:07:28.702750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:38.478 [2024-07-15 17:07:28.717507] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea360) with pdu=0x2000190f8618 00:18:38.478 [2024-07-15 17:07:28.719944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:557 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.478 [2024-07-15 17:07:28.719985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 
cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:38.478 [2024-07-15 17:07:28.734342] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea360) with pdu=0x2000190f7da8 00:18:38.478 [2024-07-15 17:07:28.736799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:16845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.478 [2024-07-15 17:07:28.736840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:38.478 [2024-07-15 17:07:28.751313] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea360) with pdu=0x2000190f7538 00:18:38.478 [2024-07-15 17:07:28.753774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:22386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.478 [2024-07-15 17:07:28.753811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:38.478 [2024-07-15 17:07:28.768124] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea360) with pdu=0x2000190f6cc8 00:18:38.478 [2024-07-15 17:07:28.770654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:13386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.478 [2024-07-15 17:07:28.770709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:38.737 [2024-07-15 17:07:28.785162] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea360) with pdu=0x2000190f6458 00:18:38.737 [2024-07-15 17:07:28.787681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:7046 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.737 [2024-07-15 17:07:28.787722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:38.737 [2024-07-15 17:07:28.801650] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea360) with pdu=0x2000190f5be8 00:18:38.737 [2024-07-15 17:07:28.803975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:20054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.737 [2024-07-15 17:07:28.804009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:38.737 [2024-07-15 17:07:28.818307] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea360) with pdu=0x2000190f5378 00:18:38.737 [2024-07-15 17:07:28.820762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:25139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.737 [2024-07-15 17:07:28.820805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:38.737 [2024-07-15 17:07:28.835306] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea360) with pdu=0x2000190f4b08 00:18:38.737 [2024-07-15 17:07:28.837615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:1749 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.737 [2024-07-15 17:07:28.837656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:41 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:38.737 [2024-07-15 17:07:28.851995] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea360) with pdu=0x2000190f4298 00:18:38.737 [2024-07-15 17:07:28.854338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:23605 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.737 [2024-07-15 17:07:28.854385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:38.737 [2024-07-15 17:07:28.868535] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea360) with pdu=0x2000190f3a28 00:18:38.737 [2024-07-15 17:07:28.870862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:2039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.737 [2024-07-15 17:07:28.870909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:38.737 [2024-07-15 17:07:28.885558] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea360) with pdu=0x2000190f31b8 00:18:38.737 [2024-07-15 17:07:28.887884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22696 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.737 [2024-07-15 17:07:28.887924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:38.737 [2024-07-15 17:07:28.902618] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea360) with pdu=0x2000190f2948 00:18:38.737 [2024-07-15 17:07:28.904950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:21540 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.737 [2024-07-15 17:07:28.905002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:38.737 [2024-07-15 17:07:28.919927] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea360) with pdu=0x2000190f20d8 00:18:38.737 [2024-07-15 17:07:28.922195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:20864 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.737 [2024-07-15 17:07:28.922234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:38.737 [2024-07-15 17:07:28.936879] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea360) with pdu=0x2000190f1868 00:18:38.737 [2024-07-15 17:07:28.939148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:5615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.737 [2024-07-15 17:07:28.939187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:38.737 [2024-07-15 17:07:28.953600] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea360) with pdu=0x2000190f0ff8 00:18:38.737 [2024-07-15 17:07:28.955759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:10989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.737 [2024-07-15 17:07:28.955796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:38.737 [2024-07-15 17:07:28.970439] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea360) with pdu=0x2000190f0788 00:18:38.737 [2024-07-15 17:07:28.972598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:25355 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.737 [2024-07-15 17:07:28.972639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:38.737 [2024-07-15 17:07:28.987129] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea360) with pdu=0x2000190eff18 00:18:38.737 [2024-07-15 17:07:28.989264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:16857 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.737 [2024-07-15 17:07:28.989319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:38.737 [2024-07-15 17:07:29.003752] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea360) with pdu=0x2000190ef6a8 00:18:38.737 [2024-07-15 17:07:29.005817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:7727 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.737 [2024-07-15 17:07:29.005865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:38.737 [2024-07-15 17:07:29.020209] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea360) with pdu=0x2000190eee38 00:18:38.737 [2024-07-15 17:07:29.022259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:22860 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.737 [2024-07-15 17:07:29.022300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:38.995 [2024-07-15 17:07:29.036882] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea360) with pdu=0x2000190ee5c8 00:18:38.995 [2024-07-15 17:07:29.038929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:15379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.995 [2024-07-15 17:07:29.038970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:38.995 [2024-07-15 17:07:29.053633] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea360) with pdu=0x2000190edd58 00:18:38.995 [2024-07-15 17:07:29.055750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:13028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.995 [2024-07-15 17:07:29.055789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:38.995 [2024-07-15 17:07:29.070543] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea360) with pdu=0x2000190ed4e8 00:18:38.995 [2024-07-15 17:07:29.072618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:6998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.995 [2024-07-15 17:07:29.072660] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:38.995 [2024-07-15 17:07:29.087450] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea360) with pdu=0x2000190ecc78 00:18:38.995 [2024-07-15 17:07:29.089453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:10816 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.995 [2024-07-15 17:07:29.089492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:38.995 [2024-07-15 17:07:29.104201] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea360) with pdu=0x2000190ec408 00:18:38.995 [2024-07-15 17:07:29.106170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:23412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.995 [2024-07-15 17:07:29.106208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:38.995 [2024-07-15 17:07:29.120920] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea360) with pdu=0x2000190ebb98 00:18:38.995 [2024-07-15 17:07:29.122979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:14197 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.995 [2024-07-15 17:07:29.123018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:38.995 [2024-07-15 17:07:29.137903] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea360) with pdu=0x2000190eb328 00:18:38.995 [2024-07-15 17:07:29.139848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:25266 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.995 [2024-07-15 17:07:29.139886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:38.995 [2024-07-15 17:07:29.154637] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea360) with pdu=0x2000190eaab8 00:18:38.995 [2024-07-15 17:07:29.156563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:13431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.995 [2024-07-15 17:07:29.156611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:38.995 [2024-07-15 17:07:29.171325] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea360) with pdu=0x2000190ea248 00:18:38.995 [2024-07-15 17:07:29.173228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:10004 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.995 [2024-07-15 17:07:29.173274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:38.995 [2024-07-15 17:07:29.188027] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea360) with pdu=0x2000190e99d8 00:18:38.995 [2024-07-15 17:07:29.189952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:3471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.995 [2024-07-15 
17:07:29.189989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:38.995 [2024-07-15 17:07:29.204626] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea360) with pdu=0x2000190e9168 00:18:38.995 [2024-07-15 17:07:29.206459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:21061 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.995 [2024-07-15 17:07:29.206497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:38.995 [2024-07-15 17:07:29.221244] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea360) with pdu=0x2000190e88f8 00:18:38.995 [2024-07-15 17:07:29.223130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:7370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.996 [2024-07-15 17:07:29.223167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:38.996 [2024-07-15 17:07:29.238058] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea360) with pdu=0x2000190e8088 00:18:38.996 [2024-07-15 17:07:29.239887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:13452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.996 [2024-07-15 17:07:29.239926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:38.996 [2024-07-15 17:07:29.254519] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea360) with pdu=0x2000190e7818 00:18:38.996 [2024-07-15 17:07:29.256314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:2150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.996 [2024-07-15 17:07:29.256366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:38.996 [2024-07-15 17:07:29.271068] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea360) with pdu=0x2000190e6fa8 00:18:38.996 [2024-07-15 17:07:29.272911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:15035 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.996 [2024-07-15 17:07:29.272950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:38.996 [2024-07-15 17:07:29.287934] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea360) with pdu=0x2000190e6738 00:18:38.996 [2024-07-15 17:07:29.289765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:17478 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:38.996 [2024-07-15 17:07:29.289814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:39.254 [2024-07-15 17:07:29.304938] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea360) with pdu=0x2000190e5ec8 00:18:39.254 [2024-07-15 17:07:29.306687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:801 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:39.254 [2024-07-15 17:07:29.306727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:39.254 [2024-07-15 17:07:29.321810] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea360) with pdu=0x2000190e5658 00:18:39.254 [2024-07-15 17:07:29.323541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:12894 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.254 [2024-07-15 17:07:29.323578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:39.254 [2024-07-15 17:07:29.338491] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea360) with pdu=0x2000190e4de8 00:18:39.254 [2024-07-15 17:07:29.340186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:7543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.254 [2024-07-15 17:07:29.340223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:39.254 [2024-07-15 17:07:29.355082] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea360) with pdu=0x2000190e4578 00:18:39.254 [2024-07-15 17:07:29.356766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:7717 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.254 [2024-07-15 17:07:29.356803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:39.254 [2024-07-15 17:07:29.371897] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea360) with pdu=0x2000190e3d08 00:18:39.254 [2024-07-15 17:07:29.373542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:15823 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.254 [2024-07-15 17:07:29.373582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:39.254 [2024-07-15 17:07:29.388565] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea360) with pdu=0x2000190e3498 00:18:39.254 [2024-07-15 17:07:29.390190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:6920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.254 [2024-07-15 17:07:29.390230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:39.254 [2024-07-15 17:07:29.405448] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea360) with pdu=0x2000190e2c28 00:18:39.254 [2024-07-15 17:07:29.407050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:8838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.254 [2024-07-15 17:07:29.407092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:39.255 [2024-07-15 17:07:29.422226] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea360) with pdu=0x2000190e23b8 00:18:39.255 [2024-07-15 17:07:29.423858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:5588 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:18:39.255 [2024-07-15 17:07:29.423896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:39.255 [2024-07-15 17:07:29.439429] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea360) with pdu=0x2000190e1b48 00:18:39.255 [2024-07-15 17:07:29.441049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:22841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.255 [2024-07-15 17:07:29.441093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:39.255 [2024-07-15 17:07:29.456508] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea360) with pdu=0x2000190e12d8 00:18:39.255 [2024-07-15 17:07:29.458093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.255 [2024-07-15 17:07:29.458152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:39.255 [2024-07-15 17:07:29.473712] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea360) with pdu=0x2000190e0a68 00:18:39.255 [2024-07-15 17:07:29.475279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:8874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.255 [2024-07-15 17:07:29.475335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:39.255 [2024-07-15 17:07:29.491052] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea360) with pdu=0x2000190e01f8 00:18:39.255 [2024-07-15 17:07:29.492632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:15598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.255 [2024-07-15 17:07:29.492688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:39.255 [2024-07-15 17:07:29.508346] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea360) with pdu=0x2000190df988 00:18:39.255 [2024-07-15 17:07:29.509863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:10680 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.255 [2024-07-15 17:07:29.509897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:39.255 [2024-07-15 17:07:29.524953] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea360) with pdu=0x2000190df118 00:18:39.255 [2024-07-15 17:07:29.526460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:10423 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.255 [2024-07-15 17:07:29.526494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:39.255 [2024-07-15 17:07:29.541861] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea360) with pdu=0x2000190de8a8 00:18:39.255 [2024-07-15 17:07:29.543314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:125 nsid:1 lba:23888 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.255 [2024-07-15 17:07:29.543373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:39.514 [2024-07-15 17:07:29.558769] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea360) with pdu=0x2000190de038 00:18:39.514 [2024-07-15 17:07:29.560213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18750 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.514 [2024-07-15 17:07:29.560268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:39.514 [2024-07-15 17:07:29.582317] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea360) with pdu=0x2000190de038 00:18:39.514 [2024-07-15 17:07:29.585054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:24917 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.514 [2024-07-15 17:07:29.585117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.514 [2024-07-15 17:07:29.599210] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea360) with pdu=0x2000190de8a8 00:18:39.514 [2024-07-15 17:07:29.601983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:2258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.514 [2024-07-15 17:07:29.602023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:39.514 [2024-07-15 17:07:29.615856] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea360) with pdu=0x2000190df118 00:18:39.514 [2024-07-15 17:07:29.618557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:18073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.514 [2024-07-15 17:07:29.618598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:39.514 [2024-07-15 17:07:29.632734] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea360) with pdu=0x2000190df988 00:18:39.514 [2024-07-15 17:07:29.635329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:19136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.514 [2024-07-15 17:07:29.635392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:39.514 [2024-07-15 17:07:29.649931] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea360) with pdu=0x2000190e01f8 00:18:39.514 [2024-07-15 17:07:29.652626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:2502 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.514 [2024-07-15 17:07:29.652667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:39.514 [2024-07-15 17:07:29.666784] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea360) with pdu=0x2000190e0a68 00:18:39.514 [2024-07-15 17:07:29.669415] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:462 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.514 [2024-07-15 17:07:29.669455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:39.514 [2024-07-15 17:07:29.683692] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea360) with pdu=0x2000190e12d8 00:18:39.514 [2024-07-15 17:07:29.686244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:17748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.514 [2024-07-15 17:07:29.686282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:39.514 [2024-07-15 17:07:29.700564] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea360) with pdu=0x2000190e1b48 00:18:39.514 [2024-07-15 17:07:29.703165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:8608 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.514 [2024-07-15 17:07:29.703212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:39.514 [2024-07-15 17:07:29.717860] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea360) with pdu=0x2000190e23b8 00:18:39.514 [2024-07-15 17:07:29.720449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:19058 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.514 [2024-07-15 17:07:29.720493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:39.514 [2024-07-15 17:07:29.734918] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea360) with pdu=0x2000190e2c28 00:18:39.514 [2024-07-15 17:07:29.737461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:15152 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.514 [2024-07-15 17:07:29.737520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:39.514 [2024-07-15 17:07:29.752172] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea360) with pdu=0x2000190e3498 00:18:39.514 [2024-07-15 17:07:29.754699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.514 [2024-07-15 17:07:29.754748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:39.514 [2024-07-15 17:07:29.769211] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea360) with pdu=0x2000190e3d08 00:18:39.514 [2024-07-15 17:07:29.771702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:6200 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.514 [2024-07-15 17:07:29.771751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:39.514 [2024-07-15 17:07:29.786242] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea360) with pdu=0x2000190e4578 00:18:39.514 [2024-07-15 
17:07:29.788779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:3698 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.514 [2024-07-15 17:07:29.788829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:39.514 [2024-07-15 17:07:29.803282] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea360) with pdu=0x2000190e4de8 00:18:39.514 [2024-07-15 17:07:29.805765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:4522 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.514 [2024-07-15 17:07:29.805809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:39.773 [2024-07-15 17:07:29.820531] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea360) with pdu=0x2000190e5658 00:18:39.773 [2024-07-15 17:07:29.822941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:6093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.773 [2024-07-15 17:07:29.822981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:39.773 [2024-07-15 17:07:29.837126] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea360) with pdu=0x2000190e5ec8 00:18:39.773 [2024-07-15 17:07:29.839531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:21781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.773 [2024-07-15 17:07:29.839575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:39.773 [2024-07-15 17:07:29.854311] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea360) with pdu=0x2000190e6738 00:18:39.773 [2024-07-15 17:07:29.856742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:17038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.773 [2024-07-15 17:07:29.856787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:39.773 [2024-07-15 17:07:29.871388] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea360) with pdu=0x2000190e6fa8 00:18:39.773 [2024-07-15 17:07:29.873750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:14625 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.773 [2024-07-15 17:07:29.873793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:39.773 [2024-07-15 17:07:29.888408] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea360) with pdu=0x2000190e7818 00:18:39.773 [2024-07-15 17:07:29.890738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:1179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.773 [2024-07-15 17:07:29.890777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:39.773 [2024-07-15 17:07:29.904944] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea360) with pdu=0x2000190e8088 
00:18:39.773 [2024-07-15 17:07:29.907241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:1266 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.773 [2024-07-15 17:07:29.907285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:39.773 [2024-07-15 17:07:29.921992] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea360) with pdu=0x2000190e88f8 00:18:39.773 [2024-07-15 17:07:29.924313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.773 [2024-07-15 17:07:29.924366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:39.773 [2024-07-15 17:07:29.938792] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea360) with pdu=0x2000190e9168 00:18:39.773 [2024-07-15 17:07:29.941112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:679 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.773 [2024-07-15 17:07:29.941171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:39.773 [2024-07-15 17:07:29.955974] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea360) with pdu=0x2000190e99d8 00:18:39.773 [2024-07-15 17:07:29.958286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:12537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.773 [2024-07-15 17:07:29.958327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:39.773 [2024-07-15 17:07:29.972424] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea360) with pdu=0x2000190ea248 00:18:39.773 [2024-07-15 17:07:29.974615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:17595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.773 [2024-07-15 17:07:29.974654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:39.773 [2024-07-15 17:07:29.989118] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea360) with pdu=0x2000190eaab8 00:18:39.773 [2024-07-15 17:07:29.991333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:673 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.773 [2024-07-15 17:07:29.991393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:39.773 [2024-07-15 17:07:30.006796] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea360) with pdu=0x2000190eb328 00:18:39.773 [2024-07-15 17:07:30.009124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:1664 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.773 [2024-07-15 17:07:30.009171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:39.773 [2024-07-15 17:07:30.023819] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea360) with 
pdu=0x2000190ebb98 00:18:39.773 [2024-07-15 17:07:30.026059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:25383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.773 [2024-07-15 17:07:30.026111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:39.773 [2024-07-15 17:07:30.040175] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea360) with pdu=0x2000190ec408 00:18:39.773 [2024-07-15 17:07:30.042382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.773 [2024-07-15 17:07:30.042447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:39.773 [2024-07-15 17:07:30.056645] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea360) with pdu=0x2000190ecc78 00:18:39.773 [2024-07-15 17:07:30.058814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:5785 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.773 [2024-07-15 17:07:30.058869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:40.032 [2024-07-15 17:07:30.073638] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea360) with pdu=0x2000190ed4e8 00:18:40.032 [2024-07-15 17:07:30.075831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:14880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.032 [2024-07-15 17:07:30.075886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:40.032 [2024-07-15 17:07:30.089708] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea360) with pdu=0x2000190edd58 00:18:40.032 [2024-07-15 17:07:30.091799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14355 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.032 [2024-07-15 17:07:30.091835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:40.032 [2024-07-15 17:07:30.106191] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea360) with pdu=0x2000190ee5c8 00:18:40.032 [2024-07-15 17:07:30.108346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15532 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.032 [2024-07-15 17:07:30.108394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:40.032 [2024-07-15 17:07:30.123174] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea360) with pdu=0x2000190eee38 00:18:40.032 [2024-07-15 17:07:30.125225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:15018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.032 [2024-07-15 17:07:30.125277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:40.032 [2024-07-15 17:07:30.140892] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x13ea360) with pdu=0x2000190ef6a8 00:18:40.032 [2024-07-15 17:07:30.142972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.032 [2024-07-15 17:07:30.143022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:40.032 [2024-07-15 17:07:30.157790] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea360) with pdu=0x2000190eff18 00:18:40.032 [2024-07-15 17:07:30.159863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:20830 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.032 [2024-07-15 17:07:30.159899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:40.032 [2024-07-15 17:07:30.174175] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea360) with pdu=0x2000190f0788 00:18:40.032 [2024-07-15 17:07:30.176302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:3483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.032 [2024-07-15 17:07:30.176353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:40.032 [2024-07-15 17:07:30.190578] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea360) with pdu=0x2000190f0ff8 00:18:40.032 [2024-07-15 17:07:30.192635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:9739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.032 [2024-07-15 17:07:30.192687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:40.032 [2024-07-15 17:07:30.207581] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea360) with pdu=0x2000190f1868 00:18:40.032 [2024-07-15 17:07:30.209596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:13426 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.032 [2024-07-15 17:07:30.209642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:40.032 [2024-07-15 17:07:30.225042] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea360) with pdu=0x2000190f20d8 00:18:40.032 [2024-07-15 17:07:30.227009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:17239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.032 [2024-07-15 17:07:30.227078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:40.032 [2024-07-15 17:07:30.241972] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea360) with pdu=0x2000190f2948 00:18:40.032 [2024-07-15 17:07:30.243945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:3076 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.032 [2024-07-15 17:07:30.243980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:40.032 [2024-07-15 17:07:30.258468] tcp.c:2081:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x13ea360) with pdu=0x2000190f31b8 00:18:40.032 [2024-07-15 17:07:30.260320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:7588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.032 [2024-07-15 17:07:30.260381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:40.032 [2024-07-15 17:07:30.274840] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea360) with pdu=0x2000190f3a28 00:18:40.032 [2024-07-15 17:07:30.276762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:24059 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.032 [2024-07-15 17:07:30.276824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:40.032 [2024-07-15 17:07:30.291814] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea360) with pdu=0x2000190f4298 00:18:40.032 [2024-07-15 17:07:30.293658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:14078 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.032 [2024-07-15 17:07:30.293696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:40.032 [2024-07-15 17:07:30.308793] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea360) with pdu=0x2000190f4b08 00:18:40.032 [2024-07-15 17:07:30.310674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:17346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.032 [2024-07-15 17:07:30.310709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:40.032 [2024-07-15 17:07:30.325559] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea360) with pdu=0x2000190f5378 00:18:40.032 [2024-07-15 17:07:30.327363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:14861 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.032 [2024-07-15 17:07:30.327408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:40.291 [2024-07-15 17:07:30.341922] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea360) with pdu=0x2000190f5be8 00:18:40.291 [2024-07-15 17:07:30.343721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:11512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.291 [2024-07-15 17:07:30.343757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:40.291 [2024-07-15 17:07:30.358817] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea360) with pdu=0x2000190f6458 00:18:40.291 [2024-07-15 17:07:30.360611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:3074 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.291 [2024-07-15 17:07:30.360650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:40.291 [2024-07-15 17:07:30.375616] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea360) with pdu=0x2000190f6cc8 00:18:40.291 [2024-07-15 17:07:30.377350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:15584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.291 [2024-07-15 17:07:30.377379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:40.292 [2024-07-15 17:07:30.392166] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea360) with pdu=0x2000190f7538 00:18:40.292 [2024-07-15 17:07:30.393860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:15756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.292 [2024-07-15 17:07:30.393889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:40.292 [2024-07-15 17:07:30.408465] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea360) with pdu=0x2000190f7da8 00:18:40.292 [2024-07-15 17:07:30.410169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:8911 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.292 [2024-07-15 17:07:30.410220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:40.292 [2024-07-15 17:07:30.425139] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea360) with pdu=0x2000190f8618 00:18:40.292 [2024-07-15 17:07:30.426865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:24258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.292 [2024-07-15 17:07:30.426917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:40.292 [2024-07-15 17:07:30.441666] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea360) with pdu=0x2000190f8e88 00:18:40.292 [2024-07-15 17:07:30.443312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:15770 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.292 [2024-07-15 17:07:30.443364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:40.292 [2024-07-15 17:07:30.458057] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea360) with pdu=0x2000190f96f8 00:18:40.292 [2024-07-15 17:07:30.459685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:5657 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.292 [2024-07-15 17:07:30.459720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:40.292 [2024-07-15 17:07:30.474371] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea360) with pdu=0x2000190f9f68 00:18:40.292 [2024-07-15 17:07:30.476010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:16912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.292 [2024-07-15 17:07:30.476046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:40.292 
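
Every failed WRITE in the run above follows the same pattern: tcp.c reports a CRC32C data digest mismatch for the PDU, nvme_qpair.c prints the command it belonged to, and the completion comes back as COMMAND TRANSIENT TRANSPORT ERROR (00/22). That status is what the digest-error test counts; the trace just below reads the counter back from bdevperf over its RPC socket and checks that it is non-zero (the (( 118 > 0 )) line in this run). A minimal standalone sketch of that readback, reusing the rpc.py path, socket, bdev name, and JSON fields exactly as they appear in the trace (the shell variable names here are only for the sketch):

    # Pull per-bdev I/O statistics, including NVMe error counters, from bdevperf
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    errcount=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    # With crc32c corruption injected beforehand (rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32,
    # cleared again with -t disable, as traced further down), this count is expected to be non-zero.
    (( errcount > 0 )) && echo "transient transport errors: $errcount"
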
00:18:40.292 Latency(us)
00:18:40.292 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:40.292 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:18:40.292 nvme0n1 : 2.00 15033.20 58.72 0.00 0.00 8506.68 6106.76 32648.84
00:18:40.292 ===================================================================================================================
00:18:40.292 Total : 15033.20 58.72 0.00 0.00 8506.68 6106.76 32648.84
00:18:40.292 0
00:18:40.292 17:07:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:18:40.292 17:07:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:18:40.292 17:07:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:18:40.292 | .driver_specific
00:18:40.292 | .nvme_error
00:18:40.292 | .status_code
00:18:40.292 | .command_transient_transport_error'
00:18:40.292 17:07:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:18:40.549 17:07:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 118 > 0 ))
00:18:40.549 17:07:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80524
00:18:40.549 17:07:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 80524 ']'
00:18:40.549 17:07:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 80524
00:18:40.549 17:07:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:18:40.549 17:07:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:18:40.549 17:07:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80524
00:18:40.549 killing process with pid 80524
00:18:40.549 Received shutdown signal, test time was about 2.000000 seconds
00:18:40.549
00:18:40.549 Latency(us)
00:18:40.549 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:40.549 ===================================================================================================================
00:18:40.549 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:18:40.549 17:07:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:18:40.549 17:07:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:18:40.549 17:07:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80524'
00:18:40.549 17:07:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 80524
00:18:40.549 17:07:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 80524
00:18:40.807 17:07:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:18:40.807 17:07:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:18:40.807 17:07:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:18:40.807 17:07:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:18:40.807 17:07:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:18:40.807 17:07:31 nvmf_tcp.nvmf_digest.nvmf_digest_error
-- host/digest.sh@58 -- # bperfpid=80584 00:18:40.807 17:07:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80584 /var/tmp/bperf.sock 00:18:40.807 17:07:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:18:40.807 17:07:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 80584 ']' 00:18:40.807 17:07:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:40.807 17:07:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:40.807 17:07:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:40.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:40.807 17:07:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:40.807 17:07:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:40.807 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:40.807 Zero copy mechanism will not be used. 00:18:40.807 [2024-07-15 17:07:31.095143] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:18:40.807 [2024-07-15 17:07:31.095229] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80584 ] 00:18:41.065 [2024-07-15 17:07:31.232775] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:41.065 [2024-07-15 17:07:31.343172] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:41.322 [2024-07-15 17:07:31.396091] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:41.888 17:07:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:41.888 17:07:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:18:41.888 17:07:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:41.888 17:07:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:42.146 17:07:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:18:42.146 17:07:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.146 17:07:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:42.146 17:07:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.146 17:07:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:42.146 17:07:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:42.403 nvme0n1 00:18:42.403 17:07:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:18:42.403 17:07:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.403 17:07:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:42.403 17:07:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.403 17:07:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:18:42.403 17:07:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:42.661 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:42.661 Zero copy mechanism will not be used. 00:18:42.661 Running I/O for 2 seconds... 00:18:42.661 [2024-07-15 17:07:32.813843] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:42.661 [2024-07-15 17:07:32.814157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.661 [2024-07-15 17:07:32.814187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:42.661 [2024-07-15 17:07:32.819429] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:42.661 [2024-07-15 17:07:32.819744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.661 [2024-07-15 17:07:32.819768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:42.661 [2024-07-15 17:07:32.824949] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:42.661 [2024-07-15 17:07:32.825244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.661 [2024-07-15 17:07:32.825267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:42.661 [2024-07-15 17:07:32.830438] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:42.661 [2024-07-15 17:07:32.830738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.661 [2024-07-15 17:07:32.830766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:42.661 [2024-07-15 17:07:32.835955] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:42.661 [2024-07-15 17:07:32.836252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.661 [2024-07-15 
17:07:32.836281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:42.661 [2024-07-15 17:07:32.841424] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:42.661 [2024-07-15 17:07:32.841720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.661 [2024-07-15 17:07:32.841744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:42.661 [2024-07-15 17:07:32.846843] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:42.661 [2024-07-15 17:07:32.847136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.661 [2024-07-15 17:07:32.847165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:42.661 [2024-07-15 17:07:32.852253] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:42.661 [2024-07-15 17:07:32.852564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.661 [2024-07-15 17:07:32.852592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:42.661 [2024-07-15 17:07:32.857686] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:42.661 [2024-07-15 17:07:32.857981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.661 [2024-07-15 17:07:32.858010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:42.661 [2024-07-15 17:07:32.863040] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:42.661 [2024-07-15 17:07:32.863338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.661 [2024-07-15 17:07:32.863377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:42.661 [2024-07-15 17:07:32.868480] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:42.661 [2024-07-15 17:07:32.868773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.661 [2024-07-15 17:07:32.868803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:42.661 [2024-07-15 17:07:32.873820] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:42.661 [2024-07-15 17:07:32.874117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:18:42.661 [2024-07-15 17:07:32.874146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:42.661 [2024-07-15 17:07:32.879216] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:42.661 [2024-07-15 17:07:32.879535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.661 [2024-07-15 17:07:32.879562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:42.661 [2024-07-15 17:07:32.884598] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:42.661 [2024-07-15 17:07:32.884893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.661 [2024-07-15 17:07:32.884920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:42.661 [2024-07-15 17:07:32.890016] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:42.661 [2024-07-15 17:07:32.890310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.661 [2024-07-15 17:07:32.890337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:42.661 [2024-07-15 17:07:32.895438] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:42.661 [2024-07-15 17:07:32.895744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.661 [2024-07-15 17:07:32.895771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:42.661 [2024-07-15 17:07:32.900818] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:42.661 [2024-07-15 17:07:32.901124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.661 [2024-07-15 17:07:32.901153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:42.661 [2024-07-15 17:07:32.906152] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:42.661 [2024-07-15 17:07:32.906475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.662 [2024-07-15 17:07:32.906503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:42.662 [2024-07-15 17:07:32.911544] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:42.662 [2024-07-15 17:07:32.911843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 
nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.662 [2024-07-15 17:07:32.911871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:42.662 [2024-07-15 17:07:32.916963] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:42.662 [2024-07-15 17:07:32.917274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.662 [2024-07-15 17:07:32.917302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:42.662 [2024-07-15 17:07:32.922421] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:42.662 [2024-07-15 17:07:32.922743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.662 [2024-07-15 17:07:32.922776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:42.662 [2024-07-15 17:07:32.927857] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:42.662 [2024-07-15 17:07:32.928149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.662 [2024-07-15 17:07:32.928178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:42.662 [2024-07-15 17:07:32.933247] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:42.662 [2024-07-15 17:07:32.933558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.662 [2024-07-15 17:07:32.933586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:42.662 [2024-07-15 17:07:32.938609] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:42.662 [2024-07-15 17:07:32.938913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.662 [2024-07-15 17:07:32.938942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:42.662 [2024-07-15 17:07:32.944019] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:42.662 [2024-07-15 17:07:32.944314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.662 [2024-07-15 17:07:32.944343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:42.662 [2024-07-15 17:07:32.949386] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:42.662 [2024-07-15 17:07:32.949680] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.662 [2024-07-15 17:07:32.949707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:42.662 [2024-07-15 17:07:32.954751] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:42.662 [2024-07-15 17:07:32.955045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.662 [2024-07-15 17:07:32.955074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:42.921 [2024-07-15 17:07:32.960291] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:42.921 [2024-07-15 17:07:32.960602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.921 [2024-07-15 17:07:32.960632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:42.921 [2024-07-15 17:07:32.965637] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:42.921 [2024-07-15 17:07:32.965931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.921 [2024-07-15 17:07:32.965960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:42.921 [2024-07-15 17:07:32.971029] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:42.921 [2024-07-15 17:07:32.971325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.921 [2024-07-15 17:07:32.971366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:42.921 [2024-07-15 17:07:32.976508] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:42.921 [2024-07-15 17:07:32.976806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.921 [2024-07-15 17:07:32.976833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:42.921 [2024-07-15 17:07:32.981968] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:42.921 [2024-07-15 17:07:32.982261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.921 [2024-07-15 17:07:32.982289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:42.921 [2024-07-15 17:07:32.987356] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:42.921 
[2024-07-15 17:07:32.987686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.921 [2024-07-15 17:07:32.987712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:42.921 [2024-07-15 17:07:32.992818] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:42.921 [2024-07-15 17:07:32.993133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.921 [2024-07-15 17:07:32.993160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:42.921 [2024-07-15 17:07:32.998219] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:42.921 [2024-07-15 17:07:32.998542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.921 [2024-07-15 17:07:32.998569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:42.921 [2024-07-15 17:07:33.003623] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:42.921 [2024-07-15 17:07:33.003918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.921 [2024-07-15 17:07:33.003945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:42.921 [2024-07-15 17:07:33.009006] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:42.921 [2024-07-15 17:07:33.009306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.921 [2024-07-15 17:07:33.009333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:42.921 [2024-07-15 17:07:33.014589] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:42.921 [2024-07-15 17:07:33.014884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.921 [2024-07-15 17:07:33.014912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:42.921 [2024-07-15 17:07:33.020018] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:42.921 [2024-07-15 17:07:33.020327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.921 [2024-07-15 17:07:33.020368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:42.921 [2024-07-15 17:07:33.025494] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:42.921 [2024-07-15 17:07:33.025811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.921 [2024-07-15 17:07:33.025839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:42.921 [2024-07-15 17:07:33.030939] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:42.921 [2024-07-15 17:07:33.031233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.921 [2024-07-15 17:07:33.031261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:42.921 [2024-07-15 17:07:33.036411] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:42.921 [2024-07-15 17:07:33.036704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.921 [2024-07-15 17:07:33.036725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:42.921 [2024-07-15 17:07:33.041831] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:42.921 [2024-07-15 17:07:33.042137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.921 [2024-07-15 17:07:33.042165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:42.921 [2024-07-15 17:07:33.047244] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:42.921 [2024-07-15 17:07:33.047589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.921 [2024-07-15 17:07:33.047616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:42.921 [2024-07-15 17:07:33.052726] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:42.921 [2024-07-15 17:07:33.053020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.921 [2024-07-15 17:07:33.053048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:42.921 [2024-07-15 17:07:33.058108] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:42.921 [2024-07-15 17:07:33.058429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.921 [2024-07-15 17:07:33.058456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:42.921 [2024-07-15 17:07:33.063642] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:42.921 [2024-07-15 17:07:33.063938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.921 [2024-07-15 17:07:33.063965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:42.921 [2024-07-15 17:07:33.069099] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:42.921 [2024-07-15 17:07:33.069412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.921 [2024-07-15 17:07:33.069440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:42.921 [2024-07-15 17:07:33.074488] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:42.921 [2024-07-15 17:07:33.074781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.921 [2024-07-15 17:07:33.074810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:42.921 [2024-07-15 17:07:33.079889] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:42.921 [2024-07-15 17:07:33.080181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.921 [2024-07-15 17:07:33.080209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:42.921 [2024-07-15 17:07:33.085332] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:42.921 [2024-07-15 17:07:33.085641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.921 [2024-07-15 17:07:33.085668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:42.921 [2024-07-15 17:07:33.090781] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:42.921 [2024-07-15 17:07:33.091079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.921 [2024-07-15 17:07:33.091106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:42.921 [2024-07-15 17:07:33.096204] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:42.921 [2024-07-15 17:07:33.096509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.921 [2024-07-15 17:07:33.096537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:18:42.921 [2024-07-15 17:07:33.101552] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:42.921 [2024-07-15 17:07:33.101845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.921 [2024-07-15 17:07:33.101872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:42.921 [2024-07-15 17:07:33.106916] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:42.922 [2024-07-15 17:07:33.107224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.922 [2024-07-15 17:07:33.107250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:42.922 [2024-07-15 17:07:33.112385] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:42.922 [2024-07-15 17:07:33.112696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.922 [2024-07-15 17:07:33.112723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:42.922 [2024-07-15 17:07:33.117738] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:42.922 [2024-07-15 17:07:33.118060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.922 [2024-07-15 17:07:33.118088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:42.922 [2024-07-15 17:07:33.123205] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:42.922 [2024-07-15 17:07:33.123534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.922 [2024-07-15 17:07:33.123562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:42.922 [2024-07-15 17:07:33.128651] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:42.922 [2024-07-15 17:07:33.128949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.922 [2024-07-15 17:07:33.128977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:42.922 [2024-07-15 17:07:33.134051] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:42.922 [2024-07-15 17:07:33.134357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.922 [2024-07-15 17:07:33.134396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:42.922 [2024-07-15 17:07:33.139444] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:42.922 [2024-07-15 17:07:33.139748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.922 [2024-07-15 17:07:33.139776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:42.922 [2024-07-15 17:07:33.144831] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:42.922 [2024-07-15 17:07:33.145138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.922 [2024-07-15 17:07:33.145166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:42.922 [2024-07-15 17:07:33.150196] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:42.922 [2024-07-15 17:07:33.150520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.922 [2024-07-15 17:07:33.150547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:42.922 [2024-07-15 17:07:33.155645] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:42.922 [2024-07-15 17:07:33.155938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.922 [2024-07-15 17:07:33.155964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:42.922 [2024-07-15 17:07:33.161052] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:42.922 [2024-07-15 17:07:33.161361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.922 [2024-07-15 17:07:33.161399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:42.922 [2024-07-15 17:07:33.166418] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:42.922 [2024-07-15 17:07:33.166712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.922 [2024-07-15 17:07:33.166739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:42.922 [2024-07-15 17:07:33.171842] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:42.922 [2024-07-15 17:07:33.172138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.922 [2024-07-15 17:07:33.172167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:42.922 [2024-07-15 17:07:33.177254] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:42.922 [2024-07-15 17:07:33.177564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.922 [2024-07-15 17:07:33.177591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:42.922 [2024-07-15 17:07:33.182658] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:42.922 [2024-07-15 17:07:33.182948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.922 [2024-07-15 17:07:33.182972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:42.922 [2024-07-15 17:07:33.188055] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:42.922 [2024-07-15 17:07:33.188361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.922 [2024-07-15 17:07:33.188399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:42.922 [2024-07-15 17:07:33.193506] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:42.922 [2024-07-15 17:07:33.193802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.922 [2024-07-15 17:07:33.193824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:42.922 [2024-07-15 17:07:33.198935] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:42.922 [2024-07-15 17:07:33.199230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.922 [2024-07-15 17:07:33.199258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:42.922 [2024-07-15 17:07:33.204342] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:42.922 [2024-07-15 17:07:33.204659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.922 [2024-07-15 17:07:33.204687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:42.922 [2024-07-15 17:07:33.209789] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:42.922 [2024-07-15 17:07:33.210098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.922 [2024-07-15 17:07:33.210125] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:42.922 [2024-07-15 17:07:33.215231] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:42.922 [2024-07-15 17:07:33.215561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.922 [2024-07-15 17:07:33.215589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:43.180 [2024-07-15 17:07:33.220624] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.180 [2024-07-15 17:07:33.220931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.180 [2024-07-15 17:07:33.220959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:43.180 [2024-07-15 17:07:33.226051] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.180 [2024-07-15 17:07:33.226358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.180 [2024-07-15 17:07:33.226395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:43.180 [2024-07-15 17:07:33.231471] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.180 [2024-07-15 17:07:33.231800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.180 [2024-07-15 17:07:33.231827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:43.180 [2024-07-15 17:07:33.236849] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.180 [2024-07-15 17:07:33.237143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.180 [2024-07-15 17:07:33.237172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:43.180 [2024-07-15 17:07:33.242140] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.180 [2024-07-15 17:07:33.242501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.180 [2024-07-15 17:07:33.242529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:43.180 [2024-07-15 17:07:33.247616] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.180 [2024-07-15 17:07:33.247938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.180 
[2024-07-15 17:07:33.247965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:43.180 [2024-07-15 17:07:33.253188] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.180 [2024-07-15 17:07:33.253522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.180 [2024-07-15 17:07:33.253550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:43.180 [2024-07-15 17:07:33.258608] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.180 [2024-07-15 17:07:33.258927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.180 [2024-07-15 17:07:33.258953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:43.180 [2024-07-15 17:07:33.264148] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.180 [2024-07-15 17:07:33.264493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.180 [2024-07-15 17:07:33.264516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:43.180 [2024-07-15 17:07:33.269568] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.180 [2024-07-15 17:07:33.269885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.180 [2024-07-15 17:07:33.269912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:43.180 [2024-07-15 17:07:33.274858] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.180 [2024-07-15 17:07:33.275156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.180 [2024-07-15 17:07:33.275183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:43.180 [2024-07-15 17:07:33.280216] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.180 [2024-07-15 17:07:33.280546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.180 [2024-07-15 17:07:33.280573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:43.180 [2024-07-15 17:07:33.285648] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.180 [2024-07-15 17:07:33.285980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.180 [2024-07-15 17:07:33.286003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:43.180 [2024-07-15 17:07:33.291109] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.180 [2024-07-15 17:07:33.291432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.180 [2024-07-15 17:07:33.291459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:43.180 [2024-07-15 17:07:33.296584] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.180 [2024-07-15 17:07:33.296884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.180 [2024-07-15 17:07:33.296910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:43.180 [2024-07-15 17:07:33.301900] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.180 [2024-07-15 17:07:33.302192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.180 [2024-07-15 17:07:33.302219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:43.180 [2024-07-15 17:07:33.307237] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.180 [2024-07-15 17:07:33.307574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.180 [2024-07-15 17:07:33.307602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:43.180 [2024-07-15 17:07:33.312618] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.180 [2024-07-15 17:07:33.312919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.180 [2024-07-15 17:07:33.312945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:43.180 [2024-07-15 17:07:33.318160] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.180 [2024-07-15 17:07:33.318499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.180 [2024-07-15 17:07:33.318521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:43.180 [2024-07-15 17:07:33.323569] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.180 [2024-07-15 17:07:33.323878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.180 [2024-07-15 17:07:33.323900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:43.180 [2024-07-15 17:07:33.329003] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.180 [2024-07-15 17:07:33.329302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.180 [2024-07-15 17:07:33.329329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:43.180 [2024-07-15 17:07:33.334562] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.180 [2024-07-15 17:07:33.334886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.180 [2024-07-15 17:07:33.334913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:43.180 [2024-07-15 17:07:33.339991] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.180 [2024-07-15 17:07:33.340326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.180 [2024-07-15 17:07:33.340366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:43.180 [2024-07-15 17:07:33.345493] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.180 [2024-07-15 17:07:33.345817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.180 [2024-07-15 17:07:33.345843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:43.180 [2024-07-15 17:07:33.351089] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.180 [2024-07-15 17:07:33.351429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.180 [2024-07-15 17:07:33.351456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:43.180 [2024-07-15 17:07:33.356616] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.180 [2024-07-15 17:07:33.356910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.180 [2024-07-15 17:07:33.356937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:43.180 [2024-07-15 17:07:33.361949] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.180 [2024-07-15 17:07:33.362275] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.180 [2024-07-15 17:07:33.362303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:43.180 [2024-07-15 17:07:33.367414] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.180 [2024-07-15 17:07:33.367730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.180 [2024-07-15 17:07:33.367758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:43.180 [2024-07-15 17:07:33.372901] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.180 [2024-07-15 17:07:33.373233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.180 [2024-07-15 17:07:33.373260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:43.180 [2024-07-15 17:07:33.378536] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.180 [2024-07-15 17:07:33.378834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.180 [2024-07-15 17:07:33.378861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:43.180 [2024-07-15 17:07:33.384057] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.180 [2024-07-15 17:07:33.384365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.180 [2024-07-15 17:07:33.384402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:43.180 [2024-07-15 17:07:33.389512] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.180 [2024-07-15 17:07:33.389829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.180 [2024-07-15 17:07:33.389851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:43.180 [2024-07-15 17:07:33.394902] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.180 [2024-07-15 17:07:33.395203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.180 [2024-07-15 17:07:33.395231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:43.180 [2024-07-15 17:07:33.400339] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.180 
[2024-07-15 17:07:33.400663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.180 [2024-07-15 17:07:33.400689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:43.180 [2024-07-15 17:07:33.405701] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.180 [2024-07-15 17:07:33.406000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.180 [2024-07-15 17:07:33.406027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:43.180 [2024-07-15 17:07:33.411005] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.180 [2024-07-15 17:07:33.411329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.180 [2024-07-15 17:07:33.411366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:43.180 [2024-07-15 17:07:33.416480] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.180 [2024-07-15 17:07:33.416777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.180 [2024-07-15 17:07:33.416803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:43.180 [2024-07-15 17:07:33.421921] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.180 [2024-07-15 17:07:33.422238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.180 [2024-07-15 17:07:33.422265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:43.180 [2024-07-15 17:07:33.427305] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.180 [2024-07-15 17:07:33.427640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.180 [2024-07-15 17:07:33.427667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:43.180 [2024-07-15 17:07:33.432788] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.180 [2024-07-15 17:07:33.433087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.180 [2024-07-15 17:07:33.433114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:43.180 [2024-07-15 17:07:33.438331] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.180 [2024-07-15 17:07:33.438637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.180 [2024-07-15 17:07:33.438664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:43.180 [2024-07-15 17:07:33.443779] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.180 [2024-07-15 17:07:33.444083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.180 [2024-07-15 17:07:33.444109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:43.180 [2024-07-15 17:07:33.449257] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.180 [2024-07-15 17:07:33.449572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.181 [2024-07-15 17:07:33.449600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:43.181 [2024-07-15 17:07:33.454576] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.181 [2024-07-15 17:07:33.454874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.181 [2024-07-15 17:07:33.454900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:43.181 [2024-07-15 17:07:33.459950] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.181 [2024-07-15 17:07:33.460266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.181 [2024-07-15 17:07:33.460292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:43.181 [2024-07-15 17:07:33.465413] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.181 [2024-07-15 17:07:33.465720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.181 [2024-07-15 17:07:33.465748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:43.181 [2024-07-15 17:07:33.470761] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.181 [2024-07-15 17:07:33.471058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.181 [2024-07-15 17:07:33.471085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:43.181 [2024-07-15 17:07:33.476333] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.181 [2024-07-15 17:07:33.476641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.181 [2024-07-15 17:07:33.476669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:43.439 [2024-07-15 17:07:33.481962] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.439 [2024-07-15 17:07:33.482256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.439 [2024-07-15 17:07:33.482292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:43.439 [2024-07-15 17:07:33.487447] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.439 [2024-07-15 17:07:33.487761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.439 [2024-07-15 17:07:33.487789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:43.439 [2024-07-15 17:07:33.492917] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.439 [2024-07-15 17:07:33.493228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.439 [2024-07-15 17:07:33.493251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:43.439 [2024-07-15 17:07:33.498319] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.439 [2024-07-15 17:07:33.498644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.439 [2024-07-15 17:07:33.498677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:43.439 [2024-07-15 17:07:33.503821] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.439 [2024-07-15 17:07:33.504119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.439 [2024-07-15 17:07:33.504147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:43.439 [2024-07-15 17:07:33.509268] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.439 [2024-07-15 17:07:33.509573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.439 [2024-07-15 17:07:33.509606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:18:43.439 [2024-07-15 17:07:33.514656] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.439 [2024-07-15 17:07:33.514951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.439 [2024-07-15 17:07:33.514979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:43.439 [2024-07-15 17:07:33.520070] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.439 [2024-07-15 17:07:33.520379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.439 [2024-07-15 17:07:33.520408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:43.439 [2024-07-15 17:07:33.525496] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.439 [2024-07-15 17:07:33.525796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.439 [2024-07-15 17:07:33.525825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:43.439 [2024-07-15 17:07:33.530967] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.439 [2024-07-15 17:07:33.531272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.439 [2024-07-15 17:07:33.531299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:43.440 [2024-07-15 17:07:33.536441] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.440 [2024-07-15 17:07:33.536751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.440 [2024-07-15 17:07:33.536778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:43.440 [2024-07-15 17:07:33.541797] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.440 [2024-07-15 17:07:33.542095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.440 [2024-07-15 17:07:33.542122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:43.440 [2024-07-15 17:07:33.547216] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.440 [2024-07-15 17:07:33.547553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.440 [2024-07-15 17:07:33.547580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:43.440 [2024-07-15 17:07:33.552639] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.440 [2024-07-15 17:07:33.552938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.440 [2024-07-15 17:07:33.552964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:43.440 [2024-07-15 17:07:33.557991] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.440 [2024-07-15 17:07:33.558289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.440 [2024-07-15 17:07:33.558315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:43.440 [2024-07-15 17:07:33.563469] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.440 [2024-07-15 17:07:33.563778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.440 [2024-07-15 17:07:33.563805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:43.440 [2024-07-15 17:07:33.568915] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.440 [2024-07-15 17:07:33.569200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.440 [2024-07-15 17:07:33.569226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:43.440 [2024-07-15 17:07:33.574251] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.440 [2024-07-15 17:07:33.574584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.440 [2024-07-15 17:07:33.574611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:43.440 [2024-07-15 17:07:33.579670] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.440 [2024-07-15 17:07:33.579964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.440 [2024-07-15 17:07:33.579991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:43.440 [2024-07-15 17:07:33.585013] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.440 [2024-07-15 17:07:33.585313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.440 [2024-07-15 17:07:33.585340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:43.440 [2024-07-15 17:07:33.590339] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.440 [2024-07-15 17:07:33.590648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.440 [2024-07-15 17:07:33.590674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:43.440 [2024-07-15 17:07:33.595868] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.440 [2024-07-15 17:07:33.596162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.440 [2024-07-15 17:07:33.596191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:43.440 [2024-07-15 17:07:33.601216] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.440 [2024-07-15 17:07:33.601543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.440 [2024-07-15 17:07:33.601569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:43.440 [2024-07-15 17:07:33.606550] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.440 [2024-07-15 17:07:33.606850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.440 [2024-07-15 17:07:33.606876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:43.440 [2024-07-15 17:07:33.612006] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.440 [2024-07-15 17:07:33.612320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.440 [2024-07-15 17:07:33.612347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:43.440 [2024-07-15 17:07:33.617343] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.440 [2024-07-15 17:07:33.617678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.440 [2024-07-15 17:07:33.617704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:43.440 [2024-07-15 17:07:33.622780] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.440 [2024-07-15 17:07:33.623072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.440 [2024-07-15 17:07:33.623100] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:43.440 [2024-07-15 17:07:33.628199] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.440 [2024-07-15 17:07:33.628505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.440 [2024-07-15 17:07:33.628532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:43.440 [2024-07-15 17:07:33.633583] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.440 [2024-07-15 17:07:33.633874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.440 [2024-07-15 17:07:33.633902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:43.440 [2024-07-15 17:07:33.639017] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.440 [2024-07-15 17:07:33.639316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.440 [2024-07-15 17:07:33.639343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:43.440 [2024-07-15 17:07:33.644743] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.440 [2024-07-15 17:07:33.645047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.440 [2024-07-15 17:07:33.645073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:43.440 [2024-07-15 17:07:33.650116] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.440 [2024-07-15 17:07:33.650443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.440 [2024-07-15 17:07:33.650466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:43.440 [2024-07-15 17:07:33.655467] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.440 [2024-07-15 17:07:33.655776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.440 [2024-07-15 17:07:33.655798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:43.440 [2024-07-15 17:07:33.661043] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.440 [2024-07-15 17:07:33.661335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.440 
[2024-07-15 17:07:33.661370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:43.440 [2024-07-15 17:07:33.666400] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.440 [2024-07-15 17:07:33.666701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.440 [2024-07-15 17:07:33.666728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:43.440 [2024-07-15 17:07:33.671817] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.440 [2024-07-15 17:07:33.672124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.440 [2024-07-15 17:07:33.672151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:43.440 [2024-07-15 17:07:33.677222] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.440 [2024-07-15 17:07:33.677540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.440 [2024-07-15 17:07:33.677567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:43.440 [2024-07-15 17:07:33.682561] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.441 [2024-07-15 17:07:33.682869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.441 [2024-07-15 17:07:33.682895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:43.441 [2024-07-15 17:07:33.687966] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.441 [2024-07-15 17:07:33.688282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.441 [2024-07-15 17:07:33.688309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:43.441 [2024-07-15 17:07:33.693513] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.441 [2024-07-15 17:07:33.693806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.441 [2024-07-15 17:07:33.693833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:43.441 [2024-07-15 17:07:33.698915] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.441 [2024-07-15 17:07:33.699211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.441 [2024-07-15 17:07:33.699239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:43.441 [2024-07-15 17:07:33.704181] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.441 [2024-07-15 17:07:33.704512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.441 [2024-07-15 17:07:33.704540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:43.441 [2024-07-15 17:07:33.709605] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.441 [2024-07-15 17:07:33.709922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.441 [2024-07-15 17:07:33.709949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:43.441 [2024-07-15 17:07:33.715123] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.441 [2024-07-15 17:07:33.715440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.441 [2024-07-15 17:07:33.715468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:43.441 [2024-07-15 17:07:33.720465] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.441 [2024-07-15 17:07:33.720758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.441 [2024-07-15 17:07:33.720785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:43.441 [2024-07-15 17:07:33.725918] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.441 [2024-07-15 17:07:33.726227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.441 [2024-07-15 17:07:33.726255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:43.441 [2024-07-15 17:07:33.731338] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.441 [2024-07-15 17:07:33.731664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.441 [2024-07-15 17:07:33.731691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:43.699 [2024-07-15 17:07:33.736711] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.699 [2024-07-15 17:07:33.737003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.699 [2024-07-15 17:07:33.737031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:43.699 [2024-07-15 17:07:33.742166] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.699 [2024-07-15 17:07:33.742512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.699 [2024-07-15 17:07:33.742539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:43.699 [2024-07-15 17:07:33.747613] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.699 [2024-07-15 17:07:33.747910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.699 [2024-07-15 17:07:33.747938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:43.699 [2024-07-15 17:07:33.753050] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.699 [2024-07-15 17:07:33.753361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.699 [2024-07-15 17:07:33.753398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:43.699 [2024-07-15 17:07:33.758785] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.699 [2024-07-15 17:07:33.759079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.699 [2024-07-15 17:07:33.759106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:43.699 [2024-07-15 17:07:33.764242] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.699 [2024-07-15 17:07:33.764579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.699 [2024-07-15 17:07:33.764606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:43.699 [2024-07-15 17:07:33.769614] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.699 [2024-07-15 17:07:33.769907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.699 [2024-07-15 17:07:33.769934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:43.699 [2024-07-15 17:07:33.774942] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.699 [2024-07-15 17:07:33.775261] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.699 [2024-07-15 17:07:33.775288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:43.700 [2024-07-15 17:07:33.780362] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.700 [2024-07-15 17:07:33.780682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.700 [2024-07-15 17:07:33.780724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:43.700 [2024-07-15 17:07:33.785777] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.700 [2024-07-15 17:07:33.786077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.700 [2024-07-15 17:07:33.786103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:43.700 [2024-07-15 17:07:33.791293] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.700 [2024-07-15 17:07:33.791615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.700 [2024-07-15 17:07:33.791642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:43.700 [2024-07-15 17:07:33.796619] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.700 [2024-07-15 17:07:33.796917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.700 [2024-07-15 17:07:33.796944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:43.700 [2024-07-15 17:07:33.802014] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.700 [2024-07-15 17:07:33.802315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.700 [2024-07-15 17:07:33.802341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:43.700 [2024-07-15 17:07:33.807341] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.700 [2024-07-15 17:07:33.807689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.700 [2024-07-15 17:07:33.807716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:43.700 [2024-07-15 17:07:33.812756] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.700 
[2024-07-15 17:07:33.813073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.700 [2024-07-15 17:07:33.813099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:43.700 [2024-07-15 17:07:33.818187] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.700 [2024-07-15 17:07:33.818515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.700 [2024-07-15 17:07:33.818538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:43.700 [2024-07-15 17:07:33.823660] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.700 [2024-07-15 17:07:33.823954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.700 [2024-07-15 17:07:33.823976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:43.700 [2024-07-15 17:07:33.829022] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.700 [2024-07-15 17:07:33.829324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.700 [2024-07-15 17:07:33.829351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:43.700 [2024-07-15 17:07:33.834454] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.700 [2024-07-15 17:07:33.834757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.700 [2024-07-15 17:07:33.834783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:43.700 [2024-07-15 17:07:33.839868] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.700 [2024-07-15 17:07:33.840168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.700 [2024-07-15 17:07:33.840194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:43.700 [2024-07-15 17:07:33.845213] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.700 [2024-07-15 17:07:33.845546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.700 [2024-07-15 17:07:33.845573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:43.700 [2024-07-15 17:07:33.850534] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.700 [2024-07-15 17:07:33.850867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.700 [2024-07-15 17:07:33.850893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:43.700 [2024-07-15 17:07:33.855995] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.700 [2024-07-15 17:07:33.856320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.700 [2024-07-15 17:07:33.856347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:43.700 [2024-07-15 17:07:33.861395] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.700 [2024-07-15 17:07:33.861702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.700 [2024-07-15 17:07:33.861729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:43.700 [2024-07-15 17:07:33.866692] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.700 [2024-07-15 17:07:33.867005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.700 [2024-07-15 17:07:33.867031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:43.700 [2024-07-15 17:07:33.872246] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.700 [2024-07-15 17:07:33.872585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.700 [2024-07-15 17:07:33.872612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:43.700 [2024-07-15 17:07:33.877643] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.700 [2024-07-15 17:07:33.877944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.700 [2024-07-15 17:07:33.877971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:43.700 [2024-07-15 17:07:33.883034] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.700 [2024-07-15 17:07:33.883343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.700 [2024-07-15 17:07:33.883379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:43.700 [2024-07-15 17:07:33.888540] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.700 [2024-07-15 17:07:33.888832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.700 [2024-07-15 17:07:33.888859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:43.700 [2024-07-15 17:07:33.893978] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.700 [2024-07-15 17:07:33.894285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.700 [2024-07-15 17:07:33.894312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:43.700 [2024-07-15 17:07:33.899494] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.700 [2024-07-15 17:07:33.899807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.700 [2024-07-15 17:07:33.899835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:43.700 [2024-07-15 17:07:33.904974] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.700 [2024-07-15 17:07:33.905272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.700 [2024-07-15 17:07:33.905300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:43.700 [2024-07-15 17:07:33.910456] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.700 [2024-07-15 17:07:33.910750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.700 [2024-07-15 17:07:33.910776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:43.700 [2024-07-15 17:07:33.915969] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.700 [2024-07-15 17:07:33.916263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.700 [2024-07-15 17:07:33.916292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:43.700 [2024-07-15 17:07:33.921345] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.700 [2024-07-15 17:07:33.921672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.700 [2024-07-15 17:07:33.921699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:18:43.700 [2024-07-15 17:07:33.926796] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.700 [2024-07-15 17:07:33.927088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.700 [2024-07-15 17:07:33.927110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:43.700 [2024-07-15 17:07:33.932184] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.700 [2024-07-15 17:07:33.932504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.700 [2024-07-15 17:07:33.932526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:43.701 [2024-07-15 17:07:33.937641] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.701 [2024-07-15 17:07:33.937935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.701 [2024-07-15 17:07:33.937962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:43.701 [2024-07-15 17:07:33.943039] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.701 [2024-07-15 17:07:33.943338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.701 [2024-07-15 17:07:33.943374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:43.701 [2024-07-15 17:07:33.948575] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.701 [2024-07-15 17:07:33.948883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.701 [2024-07-15 17:07:33.948910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:43.701 [2024-07-15 17:07:33.954055] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.701 [2024-07-15 17:07:33.954372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.701 [2024-07-15 17:07:33.954410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:43.701 [2024-07-15 17:07:33.959565] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.701 [2024-07-15 17:07:33.959858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.701 [2024-07-15 17:07:33.959885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:43.701 [2024-07-15 17:07:33.964971] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.701 [2024-07-15 17:07:33.965266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.701 [2024-07-15 17:07:33.965293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:43.701 [2024-07-15 17:07:33.970278] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.701 [2024-07-15 17:07:33.970598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.701 [2024-07-15 17:07:33.970625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:43.701 [2024-07-15 17:07:33.975649] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.701 [2024-07-15 17:07:33.975951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.701 [2024-07-15 17:07:33.975977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:43.701 [2024-07-15 17:07:33.981034] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.701 [2024-07-15 17:07:33.981332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.701 [2024-07-15 17:07:33.981368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:43.701 [2024-07-15 17:07:33.986441] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.701 [2024-07-15 17:07:33.986737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.701 [2024-07-15 17:07:33.986763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:43.701 [2024-07-15 17:07:33.991810] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.701 [2024-07-15 17:07:33.992105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.701 [2024-07-15 17:07:33.992131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:43.959 [2024-07-15 17:07:33.997226] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.959 [2024-07-15 17:07:33.997533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.959 [2024-07-15 17:07:33.997560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:43.959 [2024-07-15 17:07:34.002662] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.959 [2024-07-15 17:07:34.002969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.959 [2024-07-15 17:07:34.002996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:43.959 [2024-07-15 17:07:34.008118] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.959 [2024-07-15 17:07:34.008439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.959 [2024-07-15 17:07:34.008465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:43.959 [2024-07-15 17:07:34.013582] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.959 [2024-07-15 17:07:34.013877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.959 [2024-07-15 17:07:34.013899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:43.959 [2024-07-15 17:07:34.019002] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.959 [2024-07-15 17:07:34.019299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.959 [2024-07-15 17:07:34.019326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:43.959 [2024-07-15 17:07:34.024489] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.959 [2024-07-15 17:07:34.024793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.959 [2024-07-15 17:07:34.024822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:43.959 [2024-07-15 17:07:34.029910] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.959 [2024-07-15 17:07:34.030202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.959 [2024-07-15 17:07:34.030236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:43.959 [2024-07-15 17:07:34.035320] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.959 [2024-07-15 17:07:34.035639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.959 [2024-07-15 17:07:34.035666] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:43.959 [2024-07-15 17:07:34.040721] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.959 [2024-07-15 17:07:34.041016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.959 [2024-07-15 17:07:34.041044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:43.959 [2024-07-15 17:07:34.046094] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.959 [2024-07-15 17:07:34.046405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.959 [2024-07-15 17:07:34.046432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:43.959 [2024-07-15 17:07:34.051462] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.959 [2024-07-15 17:07:34.051765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.959 [2024-07-15 17:07:34.051792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:43.959 [2024-07-15 17:07:34.056848] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.959 [2024-07-15 17:07:34.057143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.959 [2024-07-15 17:07:34.057170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:43.959 [2024-07-15 17:07:34.062236] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.959 [2024-07-15 17:07:34.062544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.959 [2024-07-15 17:07:34.062566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:43.959 [2024-07-15 17:07:34.067647] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.959 [2024-07-15 17:07:34.067940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.959 [2024-07-15 17:07:34.067967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:43.959 [2024-07-15 17:07:34.073016] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.959 [2024-07-15 17:07:34.073311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.959 
[2024-07-15 17:07:34.073339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:43.959 [2024-07-15 17:07:34.078488] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.959 [2024-07-15 17:07:34.078781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.959 [2024-07-15 17:07:34.078808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:43.959 [2024-07-15 17:07:34.083887] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.959 [2024-07-15 17:07:34.084184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.959 [2024-07-15 17:07:34.084211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:43.959 [2024-07-15 17:07:34.089262] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.959 [2024-07-15 17:07:34.089565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.959 [2024-07-15 17:07:34.089592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:43.959 [2024-07-15 17:07:34.094649] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.959 [2024-07-15 17:07:34.094945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.959 [2024-07-15 17:07:34.094972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:43.959 [2024-07-15 17:07:34.100071] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.959 [2024-07-15 17:07:34.100377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.959 [2024-07-15 17:07:34.100417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:43.959 [2024-07-15 17:07:34.105530] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.959 [2024-07-15 17:07:34.105855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.960 [2024-07-15 17:07:34.105882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:43.960 [2024-07-15 17:07:34.110965] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.960 [2024-07-15 17:07:34.111272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.960 [2024-07-15 17:07:34.111299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:43.960 [2024-07-15 17:07:34.116504] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.960 [2024-07-15 17:07:34.116825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.960 [2024-07-15 17:07:34.116851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:43.960 [2024-07-15 17:07:34.121958] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.960 [2024-07-15 17:07:34.122274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.960 [2024-07-15 17:07:34.122300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:43.960 [2024-07-15 17:07:34.127422] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.960 [2024-07-15 17:07:34.127753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.960 [2024-07-15 17:07:34.127780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:43.960 [2024-07-15 17:07:34.133065] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.960 [2024-07-15 17:07:34.133383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.960 [2024-07-15 17:07:34.133419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:43.960 [2024-07-15 17:07:34.138406] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.960 [2024-07-15 17:07:34.138708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.960 [2024-07-15 17:07:34.138735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:43.960 [2024-07-15 17:07:34.143880] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.960 [2024-07-15 17:07:34.144199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.960 [2024-07-15 17:07:34.144226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:43.960 [2024-07-15 17:07:34.149294] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.960 [2024-07-15 17:07:34.149611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.960 [2024-07-15 17:07:34.149637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:43.960 [2024-07-15 17:07:34.154758] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.960 [2024-07-15 17:07:34.155050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.960 [2024-07-15 17:07:34.155077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:43.960 [2024-07-15 17:07:34.160242] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.960 [2024-07-15 17:07:34.160560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.960 [2024-07-15 17:07:34.160587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:43.960 [2024-07-15 17:07:34.165653] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.960 [2024-07-15 17:07:34.165953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.960 [2024-07-15 17:07:34.165979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:43.960 [2024-07-15 17:07:34.171036] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.960 [2024-07-15 17:07:34.171333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.960 [2024-07-15 17:07:34.171367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:43.960 [2024-07-15 17:07:34.176518] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.960 [2024-07-15 17:07:34.176847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.960 [2024-07-15 17:07:34.176873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:43.960 [2024-07-15 17:07:34.181870] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.960 [2024-07-15 17:07:34.182170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.960 [2024-07-15 17:07:34.182196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:43.960 [2024-07-15 17:07:34.187481] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.960 [2024-07-15 17:07:34.187790] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.960 [2024-07-15 17:07:34.187817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:43.960 [2024-07-15 17:07:34.192993] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.960 [2024-07-15 17:07:34.193288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.960 [2024-07-15 17:07:34.193315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:43.960 [2024-07-15 17:07:34.198422] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.960 [2024-07-15 17:07:34.198715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.960 [2024-07-15 17:07:34.198741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:43.960 [2024-07-15 17:07:34.203990] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.960 [2024-07-15 17:07:34.204287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.960 [2024-07-15 17:07:34.204314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:43.960 [2024-07-15 17:07:34.209439] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.960 [2024-07-15 17:07:34.209744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.960 [2024-07-15 17:07:34.209771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:43.960 [2024-07-15 17:07:34.214897] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.960 [2024-07-15 17:07:34.215198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.960 [2024-07-15 17:07:34.215219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:43.960 [2024-07-15 17:07:34.220507] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.960 [2024-07-15 17:07:34.220830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.960 [2024-07-15 17:07:34.220858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:43.960 [2024-07-15 17:07:34.226708] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.960 
[2024-07-15 17:07:34.227050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.960 [2024-07-15 17:07:34.227078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:43.960 [2024-07-15 17:07:34.232315] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.960 [2024-07-15 17:07:34.232663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.960 [2024-07-15 17:07:34.232684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:43.960 [2024-07-15 17:07:34.237828] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.960 [2024-07-15 17:07:34.238128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.960 [2024-07-15 17:07:34.238155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:43.960 [2024-07-15 17:07:34.243532] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.960 [2024-07-15 17:07:34.243830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.960 [2024-07-15 17:07:34.243857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:43.960 [2024-07-15 17:07:34.249157] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.960 [2024-07-15 17:07:34.249473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.960 [2024-07-15 17:07:34.249500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:43.960 [2024-07-15 17:07:34.254691] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:43.960 [2024-07-15 17:07:34.254999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.960 [2024-07-15 17:07:34.255025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:44.219 [2024-07-15 17:07:34.260387] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:44.219 [2024-07-15 17:07:34.260684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.219 [2024-07-15 17:07:34.260710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:44.219 [2024-07-15 17:07:34.265783] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:44.219 [2024-07-15 17:07:34.266080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.219 [2024-07-15 17:07:34.266107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:44.219 [2024-07-15 17:07:34.271190] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:44.219 [2024-07-15 17:07:34.271495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.220 [2024-07-15 17:07:34.271530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:44.220 [2024-07-15 17:07:34.276641] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:44.220 [2024-07-15 17:07:34.276933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.220 [2024-07-15 17:07:34.276960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:44.220 [2024-07-15 17:07:34.282124] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:44.220 [2024-07-15 17:07:34.282440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.220 [2024-07-15 17:07:34.282467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:44.220 [2024-07-15 17:07:34.287633] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:44.220 [2024-07-15 17:07:34.287926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.220 [2024-07-15 17:07:34.287953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:44.220 [2024-07-15 17:07:34.293127] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:44.220 [2024-07-15 17:07:34.293447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.220 [2024-07-15 17:07:34.293474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:44.220 [2024-07-15 17:07:34.298504] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:44.220 [2024-07-15 17:07:34.298808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.220 [2024-07-15 17:07:34.298836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:44.220 [2024-07-15 17:07:34.303905] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:44.220 [2024-07-15 17:07:34.304201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.220 [2024-07-15 17:07:34.304223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:44.220 [2024-07-15 17:07:34.309382] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:44.220 [2024-07-15 17:07:34.309691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.220 [2024-07-15 17:07:34.309717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:44.220 [2024-07-15 17:07:34.314815] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:44.220 [2024-07-15 17:07:34.315129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.220 [2024-07-15 17:07:34.315156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:44.220 [2024-07-15 17:07:34.320301] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:44.220 [2024-07-15 17:07:34.320612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.220 [2024-07-15 17:07:34.320639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:44.220 [2024-07-15 17:07:34.325727] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:44.220 [2024-07-15 17:07:34.326033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.220 [2024-07-15 17:07:34.326061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:44.220 [2024-07-15 17:07:34.331218] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:44.220 [2024-07-15 17:07:34.331535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.220 [2024-07-15 17:07:34.331562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:44.220 [2024-07-15 17:07:34.336645] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:44.220 [2024-07-15 17:07:34.336936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.220 [2024-07-15 17:07:34.336962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:18:44.220 [2024-07-15 17:07:34.342088] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:44.220 [2024-07-15 17:07:34.342407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.220 [2024-07-15 17:07:34.342434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:44.220 [2024-07-15 17:07:34.347458] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:44.220 [2024-07-15 17:07:34.347774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.220 [2024-07-15 17:07:34.347809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:44.220 [2024-07-15 17:07:34.352993] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:44.220 [2024-07-15 17:07:34.353307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.220 [2024-07-15 17:07:34.353334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:44.220 [2024-07-15 17:07:34.358471] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:44.220 [2024-07-15 17:07:34.358765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.220 [2024-07-15 17:07:34.358792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:44.220 [2024-07-15 17:07:34.363840] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:44.220 [2024-07-15 17:07:34.364133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.220 [2024-07-15 17:07:34.364161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:44.220 [2024-07-15 17:07:34.369277] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:44.220 [2024-07-15 17:07:34.369603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.220 [2024-07-15 17:07:34.369630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:44.220 [2024-07-15 17:07:34.374665] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:44.220 [2024-07-15 17:07:34.374957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.220 [2024-07-15 17:07:34.374984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:44.220 [2024-07-15 17:07:34.380104] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:44.220 [2024-07-15 17:07:34.380425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.220 [2024-07-15 17:07:34.380452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:44.220 [2024-07-15 17:07:34.385542] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:44.220 [2024-07-15 17:07:34.385836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.220 [2024-07-15 17:07:34.385863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:44.220 [2024-07-15 17:07:34.390933] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:44.220 [2024-07-15 17:07:34.391225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.220 [2024-07-15 17:07:34.391252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:44.220 [2024-07-15 17:07:34.396305] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:44.220 [2024-07-15 17:07:34.396609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.220 [2024-07-15 17:07:34.396642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:44.220 [2024-07-15 17:07:34.401701] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:44.220 [2024-07-15 17:07:34.401998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.220 [2024-07-15 17:07:34.402025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:44.220 [2024-07-15 17:07:34.407082] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:44.220 [2024-07-15 17:07:34.407379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.220 [2024-07-15 17:07:34.407414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:44.220 [2024-07-15 17:07:34.412587] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:44.220 [2024-07-15 17:07:34.412904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.220 [2024-07-15 17:07:34.412930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:44.220 [2024-07-15 17:07:34.418327] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:44.220 [2024-07-15 17:07:34.418684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.220 [2024-07-15 17:07:34.418712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:44.220 [2024-07-15 17:07:34.423779] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:44.220 [2024-07-15 17:07:34.424091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.220 [2024-07-15 17:07:34.424119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:44.220 [2024-07-15 17:07:34.429332] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:44.220 [2024-07-15 17:07:34.429663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.220 [2024-07-15 17:07:34.429685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:44.221 [2024-07-15 17:07:34.434813] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:44.221 [2024-07-15 17:07:34.435114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.221 [2024-07-15 17:07:34.435142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:44.221 [2024-07-15 17:07:34.440347] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:44.221 [2024-07-15 17:07:34.440688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.221 [2024-07-15 17:07:34.440717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:44.221 [2024-07-15 17:07:34.445842] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:44.221 [2024-07-15 17:07:34.446138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.221 [2024-07-15 17:07:34.446166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:44.221 [2024-07-15 17:07:34.451529] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:44.221 [2024-07-15 17:07:34.451840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.221 [2024-07-15 17:07:34.451877] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:44.221 [2024-07-15 17:07:34.457188] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:44.221 [2024-07-15 17:07:34.457533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.221 [2024-07-15 17:07:34.457562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:44.221 [2024-07-15 17:07:34.462674] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:44.221 [2024-07-15 17:07:34.462976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.221 [2024-07-15 17:07:34.463003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:44.221 [2024-07-15 17:07:34.468334] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:44.221 [2024-07-15 17:07:34.468644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.221 [2024-07-15 17:07:34.468672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:44.221 [2024-07-15 17:07:34.473746] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:44.221 [2024-07-15 17:07:34.474045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.221 [2024-07-15 17:07:34.474074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:44.221 [2024-07-15 17:07:34.479321] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:44.221 [2024-07-15 17:07:34.479673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.221 [2024-07-15 17:07:34.479703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:44.221 [2024-07-15 17:07:34.484864] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:44.221 [2024-07-15 17:07:34.485163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.221 [2024-07-15 17:07:34.485206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:44.221 [2024-07-15 17:07:34.490427] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:44.221 [2024-07-15 17:07:34.490734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.221 
[2024-07-15 17:07:34.490762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:44.221 [2024-07-15 17:07:34.495977] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:44.221 [2024-07-15 17:07:34.496277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.221 [2024-07-15 17:07:34.496305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:44.221 [2024-07-15 17:07:34.501480] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:44.221 [2024-07-15 17:07:34.501776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.221 [2024-07-15 17:07:34.501814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:44.221 [2024-07-15 17:07:34.507129] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:44.221 [2024-07-15 17:07:34.507459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.221 [2024-07-15 17:07:34.507488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:44.221 [2024-07-15 17:07:34.512712] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:44.221 [2024-07-15 17:07:34.513014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.221 [2024-07-15 17:07:34.513043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:44.481 [2024-07-15 17:07:34.518344] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:44.481 [2024-07-15 17:07:34.518676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.481 [2024-07-15 17:07:34.518718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:44.481 [2024-07-15 17:07:34.524028] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:44.481 [2024-07-15 17:07:34.524323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.481 [2024-07-15 17:07:34.524351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:44.481 [2024-07-15 17:07:34.529615] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:44.481 [2024-07-15 17:07:34.529914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:18:44.481 [2024-07-15 17:07:34.529944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:44.481 [2024-07-15 17:07:34.535102] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:44.481 [2024-07-15 17:07:34.535413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.481 [2024-07-15 17:07:34.535442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:44.481 [2024-07-15 17:07:34.540683] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:44.481 [2024-07-15 17:07:34.541014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.481 [2024-07-15 17:07:34.541041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:44.481 [2024-07-15 17:07:34.546239] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:44.481 [2024-07-15 17:07:34.546575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.481 [2024-07-15 17:07:34.546603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:44.481 [2024-07-15 17:07:34.551823] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:44.481 [2024-07-15 17:07:34.552160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.481 [2024-07-15 17:07:34.552187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:44.481 [2024-07-15 17:07:34.557345] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:44.481 [2024-07-15 17:07:34.557689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.481 [2024-07-15 17:07:34.557718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:44.481 [2024-07-15 17:07:34.562717] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:44.481 [2024-07-15 17:07:34.563012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.481 [2024-07-15 17:07:34.563040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:44.481 [2024-07-15 17:07:34.568147] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:44.481 [2024-07-15 17:07:34.568483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.481 [2024-07-15 17:07:34.568510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:44.481 [2024-07-15 17:07:34.573735] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:44.481 [2024-07-15 17:07:34.574036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.481 [2024-07-15 17:07:34.574064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:44.481 [2024-07-15 17:07:34.579176] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:44.481 [2024-07-15 17:07:34.579485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.481 [2024-07-15 17:07:34.579518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:44.481 [2024-07-15 17:07:34.584591] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:44.481 [2024-07-15 17:07:34.584909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.481 [2024-07-15 17:07:34.584932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:44.481 [2024-07-15 17:07:34.590094] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:44.481 [2024-07-15 17:07:34.590410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.481 [2024-07-15 17:07:34.590439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:44.482 [2024-07-15 17:07:34.595562] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:44.482 [2024-07-15 17:07:34.595869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.482 [2024-07-15 17:07:34.595897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:44.482 [2024-07-15 17:07:34.600996] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:44.482 [2024-07-15 17:07:34.601292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.482 [2024-07-15 17:07:34.601331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:44.482 [2024-07-15 17:07:34.606509] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:44.482 [2024-07-15 17:07:34.606805] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.482 [2024-07-15 17:07:34.606833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:44.482 [2024-07-15 17:07:34.611921] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:44.482 [2024-07-15 17:07:34.612217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.482 [2024-07-15 17:07:34.612245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:44.482 [2024-07-15 17:07:34.617381] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:44.482 [2024-07-15 17:07:34.617691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.482 [2024-07-15 17:07:34.617719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:44.482 [2024-07-15 17:07:34.622909] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:44.482 [2024-07-15 17:07:34.623205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.482 [2024-07-15 17:07:34.623233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:44.482 [2024-07-15 17:07:34.628329] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:44.482 [2024-07-15 17:07:34.628653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.482 [2024-07-15 17:07:34.628681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:44.482 [2024-07-15 17:07:34.633848] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:44.482 [2024-07-15 17:07:34.634167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.482 [2024-07-15 17:07:34.634195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:44.482 [2024-07-15 17:07:34.639333] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:44.482 [2024-07-15 17:07:34.639690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.482 [2024-07-15 17:07:34.639718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:44.482 [2024-07-15 17:07:34.644853] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:44.482 
[2024-07-15 17:07:34.645153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.482 [2024-07-15 17:07:34.645181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:44.482 [2024-07-15 17:07:34.650430] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:44.482 [2024-07-15 17:07:34.650727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.482 [2024-07-15 17:07:34.650756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:44.482 [2024-07-15 17:07:34.655996] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:44.482 [2024-07-15 17:07:34.656314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.482 [2024-07-15 17:07:34.656342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:44.482 [2024-07-15 17:07:34.661506] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:44.482 [2024-07-15 17:07:34.661810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.482 [2024-07-15 17:07:34.661838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:44.482 [2024-07-15 17:07:34.666964] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:44.482 [2024-07-15 17:07:34.667267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.482 [2024-07-15 17:07:34.667295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:44.482 [2024-07-15 17:07:34.672651] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:44.482 [2024-07-15 17:07:34.672968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.482 [2024-07-15 17:07:34.672996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:44.482 [2024-07-15 17:07:34.678087] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:44.482 [2024-07-15 17:07:34.678403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.482 [2024-07-15 17:07:34.678447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:44.482 [2024-07-15 17:07:34.683447] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) 
with pdu=0x2000190fef90 00:18:44.482 [2024-07-15 17:07:34.683783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.482 [2024-07-15 17:07:34.683811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:44.482 [2024-07-15 17:07:34.688961] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:44.482 [2024-07-15 17:07:34.689298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.482 [2024-07-15 17:07:34.689326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:44.482 [2024-07-15 17:07:34.694675] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:44.482 [2024-07-15 17:07:34.694973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.482 [2024-07-15 17:07:34.695001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:44.482 [2024-07-15 17:07:34.700079] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:44.482 [2024-07-15 17:07:34.700395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.482 [2024-07-15 17:07:34.700433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:44.482 [2024-07-15 17:07:34.705495] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:44.482 [2024-07-15 17:07:34.705799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.482 [2024-07-15 17:07:34.705826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:44.482 [2024-07-15 17:07:34.710910] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:44.482 [2024-07-15 17:07:34.711213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.482 [2024-07-15 17:07:34.711240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:44.482 [2024-07-15 17:07:34.716396] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:44.482 [2024-07-15 17:07:34.716705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.482 [2024-07-15 17:07:34.716732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:44.482 [2024-07-15 17:07:34.721941] tcp.c:2081:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:44.482 [2024-07-15 17:07:34.722243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.482 [2024-07-15 17:07:34.722270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:44.482 [2024-07-15 17:07:34.727622] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:44.482 [2024-07-15 17:07:34.727927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.482 [2024-07-15 17:07:34.727955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:44.482 [2024-07-15 17:07:34.733258] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:44.482 [2024-07-15 17:07:34.733583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.482 [2024-07-15 17:07:34.733611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:44.482 [2024-07-15 17:07:34.738770] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:44.482 [2024-07-15 17:07:34.739068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.482 [2024-07-15 17:07:34.739097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:44.482 [2024-07-15 17:07:34.744132] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:44.482 [2024-07-15 17:07:34.744465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.482 [2024-07-15 17:07:34.744493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:44.482 [2024-07-15 17:07:34.749523] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:44.482 [2024-07-15 17:07:34.749825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.482 [2024-07-15 17:07:34.749852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:44.483 [2024-07-15 17:07:34.754982] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:44.483 [2024-07-15 17:07:34.755283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.483 [2024-07-15 17:07:34.755311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:44.483 [2024-07-15 17:07:34.760342] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:44.483 [2024-07-15 17:07:34.760679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.483 [2024-07-15 17:07:34.760706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:44.483 [2024-07-15 17:07:34.765787] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:44.483 [2024-07-15 17:07:34.766101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.483 [2024-07-15 17:07:34.766129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:44.483 [2024-07-15 17:07:34.771146] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:44.483 [2024-07-15 17:07:34.771469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.483 [2024-07-15 17:07:34.771497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:44.483 [2024-07-15 17:07:34.776734] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:44.483 [2024-07-15 17:07:34.777033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.483 [2024-07-15 17:07:34.777062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:44.741 [2024-07-15 17:07:34.782217] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:44.741 [2024-07-15 17:07:34.782525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.741 [2024-07-15 17:07:34.782553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:44.741 [2024-07-15 17:07:34.787745] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:44.741 [2024-07-15 17:07:34.788042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.741 [2024-07-15 17:07:34.788071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:44.741 [2024-07-15 17:07:34.793171] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:44.741 [2024-07-15 17:07:34.793498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.741 [2024-07-15 17:07:34.793525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:18:44.741 [2024-07-15 17:07:34.798651] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:44.741 [2024-07-15 17:07:34.798951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.741 [2024-07-15 17:07:34.798979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:44.741 [2024-07-15 17:07:34.804152] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13ea500) with pdu=0x2000190fef90 00:18:44.741 [2024-07-15 17:07:34.804478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.741 [2024-07-15 17:07:34.804506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:44.741 00:18:44.741 Latency(us) 00:18:44.741 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:44.741 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:18:44.741 nvme0n1 : 2.00 5678.93 709.87 0.00 0.00 2811.35 2442.71 6166.34 00:18:44.741 =================================================================================================================== 00:18:44.741 Total : 5678.93 709.87 0.00 0.00 2811.35 2442.71 6166.34 00:18:44.741 0 00:18:44.741 17:07:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:18:44.741 17:07:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:18:44.741 17:07:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:18:44.741 17:07:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:18:44.741 | .driver_specific 00:18:44.741 | .nvme_error 00:18:44.741 | .status_code 00:18:44.741 | .command_transient_transport_error' 00:18:45.000 17:07:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 366 > 0 )) 00:18:45.000 17:07:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80584 00:18:45.000 17:07:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 80584 ']' 00:18:45.000 17:07:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 80584 00:18:45.000 17:07:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:18:45.000 17:07:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:45.000 17:07:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80584 00:18:45.000 17:07:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:45.000 killing process with pid 80584 00:18:45.000 17:07:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:45.000 17:07:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80584' 00:18:45.000 Received shutdown signal, test time was about 2.000000 seconds 00:18:45.000 00:18:45.000 Latency(us) 00:18:45.000 Device 
Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:45.000 =================================================================================================================== 00:18:45.000 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:45.000 17:07:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 80584 00:18:45.000 17:07:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 80584 00:18:45.258 17:07:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 80371 00:18:45.258 17:07:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 80371 ']' 00:18:45.258 17:07:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 80371 00:18:45.258 17:07:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:18:45.258 17:07:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:45.258 17:07:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80371 00:18:45.258 17:07:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:45.258 killing process with pid 80371 00:18:45.258 17:07:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:45.258 17:07:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80371' 00:18:45.258 17:07:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 80371 00:18:45.258 17:07:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 80371 00:18:45.517 00:18:45.517 real 0m18.983s 00:18:45.517 user 0m37.155s 00:18:45.517 sys 0m4.763s 00:18:45.517 ************************************ 00:18:45.517 END TEST nvmf_digest_error 00:18:45.517 ************************************ 00:18:45.517 17:07:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:45.517 17:07:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:45.517 17:07:35 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:18:45.517 17:07:35 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:18:45.517 17:07:35 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:18:45.517 17:07:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:45.517 17:07:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:18:45.517 17:07:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:45.517 17:07:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:18:45.517 17:07:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:45.517 17:07:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:45.517 rmmod nvme_tcp 00:18:45.517 rmmod nvme_fabrics 00:18:45.517 rmmod nvme_keyring 00:18:45.517 17:07:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:45.517 17:07:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:18:45.517 17:07:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:18:45.517 17:07:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 80371 ']' 00:18:45.517 17:07:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 
80371 00:18:45.517 17:07:35 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 80371 ']' 00:18:45.517 17:07:35 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 80371 00:18:45.517 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (80371) - No such process 00:18:45.517 Process with pid 80371 is not found 00:18:45.517 17:07:35 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 80371 is not found' 00:18:45.517 17:07:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:45.517 17:07:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:45.517 17:07:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:45.517 17:07:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:45.517 17:07:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:45.517 17:07:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:45.517 17:07:35 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:45.517 17:07:35 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:45.517 17:07:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:45.517 00:18:45.517 real 0m39.036s 00:18:45.517 user 1m15.249s 00:18:45.517 sys 0m9.895s 00:18:45.517 17:07:35 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:45.517 17:07:35 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:18:45.517 ************************************ 00:18:45.517 END TEST nvmf_digest 00:18:45.517 ************************************ 00:18:45.777 17:07:35 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:45.777 17:07:35 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:18:45.777 17:07:35 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 1 -eq 1 ]] 00:18:45.777 17:07:35 nvmf_tcp -- nvmf/nvmf.sh@117 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:18:45.777 17:07:35 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:45.777 17:07:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:45.777 17:07:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:45.777 ************************************ 00:18:45.777 START TEST nvmf_host_multipath 00:18:45.777 ************************************ 00:18:45.777 17:07:35 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:18:45.777 * Looking for test storage... 
00:18:45.777 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:45.777 17:07:35 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:45.777 17:07:35 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:18:45.777 17:07:35 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:45.777 17:07:35 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:45.777 17:07:35 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:45.777 17:07:35 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:45.777 17:07:35 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:45.777 17:07:35 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:45.777 17:07:35 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:45.777 17:07:35 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:45.777 17:07:35 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:45.777 17:07:35 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:45.777 17:07:35 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da 00:18:45.777 17:07:35 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=0b4e8503-7bac-4879-926a-209303c4b3da 00:18:45.777 17:07:35 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:45.777 17:07:35 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:45.777 17:07:35 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:45.777 17:07:35 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:45.777 17:07:35 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:45.777 17:07:35 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:45.777 17:07:35 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:45.777 17:07:35 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:45.778 17:07:35 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.778 17:07:35 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.778 17:07:35 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.778 17:07:35 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:18:45.778 17:07:35 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.778 17:07:35 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@47 -- # : 0 00:18:45.778 17:07:35 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:45.778 17:07:35 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:45.778 17:07:35 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:45.778 17:07:35 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:45.778 17:07:35 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:45.778 17:07:35 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:45.778 17:07:35 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:45.778 17:07:35 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:45.778 17:07:35 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:45.778 17:07:35 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:45.778 17:07:35 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:45.778 17:07:35 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:18:45.778 17:07:35 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:45.778 17:07:35 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:18:45.778 17:07:35 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:18:45.778 17:07:35 
nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:45.778 17:07:35 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:45.778 17:07:35 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:45.778 17:07:35 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:45.778 17:07:35 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:45.778 17:07:35 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:45.778 17:07:35 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:45.778 17:07:35 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:45.778 17:07:35 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:45.778 17:07:35 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:45.778 17:07:35 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:45.778 17:07:35 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:45.778 17:07:35 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:45.778 17:07:35 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:45.778 17:07:35 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:45.778 17:07:35 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:45.778 17:07:35 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:45.778 17:07:35 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:45.778 17:07:35 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:45.778 17:07:35 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:45.778 17:07:35 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:45.778 17:07:35 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:45.778 17:07:35 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:45.778 17:07:35 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:45.778 17:07:35 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:45.778 17:07:35 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:45.778 17:07:35 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:45.778 17:07:35 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:45.778 Cannot find device "nvmf_tgt_br" 00:18:45.778 17:07:35 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@155 -- # true 00:18:45.778 17:07:35 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:45.778 Cannot find device "nvmf_tgt_br2" 00:18:45.778 17:07:35 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@156 -- # true 00:18:45.778 17:07:35 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:45.778 17:07:35 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br 
down 00:18:45.778 Cannot find device "nvmf_tgt_br" 00:18:45.778 17:07:35 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@158 -- # true 00:18:45.778 17:07:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:45.778 Cannot find device "nvmf_tgt_br2" 00:18:45.778 17:07:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@159 -- # true 00:18:45.778 17:07:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:45.778 17:07:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:46.038 17:07:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:46.038 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:46.038 17:07:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:18:46.038 17:07:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:46.038 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:46.038 17:07:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:18:46.038 17:07:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:46.038 17:07:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:46.038 17:07:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:46.038 17:07:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:46.038 17:07:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:46.038 17:07:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:46.038 17:07:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:46.038 17:07:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:46.038 17:07:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:46.038 17:07:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:46.038 17:07:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:46.038 17:07:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:46.038 17:07:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:46.038 17:07:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:46.038 17:07:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:46.038 17:07:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:46.038 17:07:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:46.038 17:07:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:46.038 17:07:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 
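The "Cannot find device" and "Cannot open network namespace" messages are only the cleanup of a previous topology; nvmf_veth_init then rebuilds it. A condensed sketch of the layout being wired up here, using the same names and addresses as the trace (the second target interface, the iptables rule, and the ping checks that follow are omitted):

    # Run as root; names and addresses match the log above. Sketch of nvmf_veth_init's layout only.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator-side veth pair
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # target-side veth pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                 # target end lives in the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                        # bridge the two host-side peer ends
    ip link set nvmf_tgt_br master nvmf_br
    ping -c 1 10.0.0.2                                             # initiator can now reach the target address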
00:18:46.038 17:07:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:46.038 17:07:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:46.038 17:07:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:46.038 17:07:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:46.038 17:07:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:46.038 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:46.038 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.083 ms 00:18:46.038 00:18:46.038 --- 10.0.0.2 ping statistics --- 00:18:46.038 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:46.038 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:18:46.038 17:07:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:46.038 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:46.038 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:18:46.038 00:18:46.038 --- 10.0.0.3 ping statistics --- 00:18:46.038 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:46.038 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:18:46.038 17:07:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:46.038 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:46.038 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:18:46.038 00:18:46.038 --- 10.0.0.1 ping statistics --- 00:18:46.038 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:46.038 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:18:46.038 17:07:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:46.038 17:07:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@433 -- # return 0 00:18:46.038 17:07:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:46.038 17:07:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:46.038 17:07:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:46.038 17:07:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:46.038 17:07:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:46.038 17:07:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:46.038 17:07:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:46.038 17:07:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:18:46.038 17:07:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:46.038 17:07:36 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:46.038 17:07:36 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:46.038 17:07:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@481 -- # nvmfpid=80850 00:18:46.038 17:07:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:18:46.038 17:07:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@482 -- # waitforlisten 80850 00:18:46.038 17:07:36 nvmf_tcp.nvmf_host_multipath -- 
common/autotest_common.sh@829 -- # '[' -z 80850 ']' 00:18:46.038 17:07:36 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:46.038 17:07:36 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:46.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:46.038 17:07:36 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:46.038 17:07:36 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:46.038 17:07:36 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:46.297 [2024-07-15 17:07:36.345210] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:18:46.297 [2024-07-15 17:07:36.345824] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:46.297 [2024-07-15 17:07:36.483815] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:46.557 [2024-07-15 17:07:36.615309] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:46.557 [2024-07-15 17:07:36.615408] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:46.557 [2024-07-15 17:07:36.615423] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:46.557 [2024-07-15 17:07:36.615434] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:46.557 [2024-07-15 17:07:36.615443] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
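nvmfappstart runs the target inside that namespace and blocks until its RPC socket answers before the test proceeds. A minimal sketch of the same launch-and-wait pattern; the polling loop is an illustrative stand-in for autotest's waitforlisten helper, and the binary, flags, and socket path are the ones shown in the trace:

    # Sketch only; waitforlisten does more bookkeeping than this loop.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
    nvmfpid=$!
    # Poll an RPC that is always registered until the socket is up.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
    echo "nvmf_tgt (pid $nvmfpid) is up on /var/tmp/spdk.sock"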
00:18:46.557 [2024-07-15 17:07:36.615587] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:46.557 [2024-07-15 17:07:36.615846] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:46.557 [2024-07-15 17:07:36.674461] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:47.124 17:07:37 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:47.124 17:07:37 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@862 -- # return 0 00:18:47.124 17:07:37 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:47.124 17:07:37 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:47.124 17:07:37 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:47.124 17:07:37 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:47.124 17:07:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=80850 00:18:47.124 17:07:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:47.382 [2024-07-15 17:07:37.563120] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:47.382 17:07:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:18:47.640 Malloc0 00:18:47.640 17:07:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:18:47.899 17:07:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:48.158 17:07:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:48.416 [2024-07-15 17:07:38.534337] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:48.417 17:07:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:48.674 [2024-07-15 17:07:38.750481] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:48.674 17:07:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=80903 00:18:48.674 17:07:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:48.674 17:07:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 80903 /var/tmp/bdevperf.sock 00:18:48.674 17:07:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:18:48.674 17:07:38 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@829 -- # '[' -z 80903 ']' 00:18:48.674 17:07:38 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:48.674 17:07:38 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:48.674 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:48.674 17:07:38 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:48.674 17:07:38 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:48.674 17:07:38 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:49.649 17:07:39 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:49.649 17:07:39 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@862 -- # return 0 00:18:49.649 17:07:39 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:18:49.908 17:07:39 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:18:50.166 Nvme0n1 00:18:50.166 17:07:40 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:18:50.424 Nvme0n1 00:18:50.424 17:07:40 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:18:50.424 17:07:40 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:18:51.376 17:07:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:18:51.376 17:07:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:51.634 17:07:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:51.893 17:07:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:18:51.893 17:07:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80944 00:18:51.893 17:07:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:51.893 17:07:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80850 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:58.455 17:07:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:58.455 17:07:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:58.455 Attaching 4 probes... 
00:18:58.455 @path[10.0.0.2, 4421]: 17560 00:18:58.455 @path[10.0.0.2, 4421]: 18084 00:18:58.455 @path[10.0.0.2, 4421]: 17152 00:18:58.455 @path[10.0.0.2, 4421]: 17949 00:18:58.455 @path[10.0.0.2, 4421]: 18168 00:18:58.455 17:07:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:18:58.455 17:07:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:58.455 17:07:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:58.455 17:07:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:58.455 17:07:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:58.455 17:07:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:18:58.455 17:07:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:58.455 17:07:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:58.455 17:07:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80944 00:18:58.455 17:07:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:58.455 17:07:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:18:58.455 17:07:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:58.455 17:07:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:18:58.713 17:07:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:18:58.713 17:07:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81062 00:18:58.713 17:07:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80850 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:58.713 17:07:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:05.282 17:07:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:05.282 17:07:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:19:05.282 17:07:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:19:05.282 17:07:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:05.282 Attaching 4 probes... 
00:19:05.282 @path[10.0.0.2, 4420]: 17936 00:19:05.282 @path[10.0.0.2, 4420]: 18299 00:19:05.282 @path[10.0.0.2, 4420]: 18400 00:19:05.282 @path[10.0.0.2, 4420]: 18359 00:19:05.282 @path[10.0.0.2, 4420]: 18460 00:19:05.282 17:07:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:05.282 17:07:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:05.282 17:07:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:19:05.282 17:07:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:19:05.282 17:07:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:19:05.282 17:07:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:19:05.282 17:07:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81062 00:19:05.282 17:07:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:05.282 17:07:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:19:05.282 17:07:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:19:05.282 17:07:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:19:05.539 17:07:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:19:05.539 17:07:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81178 00:19:05.539 17:07:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80850 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:05.539 17:07:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:12.098 17:08:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:12.098 17:08:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:19:12.098 17:08:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:19:12.098 17:08:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:12.098 Attaching 4 probes... 
00:19:12.098 @path[10.0.0.2, 4421]: 13735 00:19:12.098 @path[10.0.0.2, 4421]: 18116 00:19:12.098 @path[10.0.0.2, 4421]: 18122 00:19:12.098 @path[10.0.0.2, 4421]: 18117 00:19:12.098 @path[10.0.0.2, 4421]: 17897 00:19:12.098 17:08:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:12.098 17:08:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:12.098 17:08:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:19:12.098 17:08:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:19:12.099 17:08:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:19:12.099 17:08:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:19:12.099 17:08:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81178 00:19:12.099 17:08:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:12.099 17:08:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:19:12.099 17:08:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:19:12.099 17:08:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:19:12.099 17:08:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:19:12.099 17:08:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81287 00:19:12.099 17:08:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80850 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:12.099 17:08:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:18.659 17:08:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:18.659 17:08:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:19:18.659 17:08:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:19:18.659 17:08:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:18.659 Attaching 4 probes... 
00:19:18.659 00:19:18.659 00:19:18.659 00:19:18.659 00:19:18.659 00:19:18.659 17:08:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:19:18.659 17:08:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:18.659 17:08:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:18.659 17:08:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:19:18.659 17:08:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:19:18.659 17:08:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:19:18.659 17:08:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81287 00:19:18.659 17:08:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:18.659 17:08:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:19:18.659 17:08:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:19:18.659 17:08:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:19:18.918 17:08:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:19:18.918 17:08:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80850 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:18.918 17:08:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81404 00:19:18.918 17:08:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:25.478 17:08:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:19:25.478 17:08:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:25.478 17:08:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:19:25.479 17:08:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:25.479 Attaching 4 probes... 
00:19:25.479 @path[10.0.0.2, 4421]: 17237 00:19:25.479 @path[10.0.0.2, 4421]: 17793 00:19:25.479 @path[10.0.0.2, 4421]: 17702 00:19:25.479 @path[10.0.0.2, 4421]: 17663 00:19:25.479 @path[10.0.0.2, 4421]: 17680 00:19:25.479 17:08:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:25.479 17:08:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:19:25.479 17:08:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:25.479 17:08:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:19:25.479 17:08:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:19:25.479 17:08:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:19:25.479 17:08:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81404 00:19:25.479 17:08:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:25.479 17:08:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:19:25.479 17:08:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:19:26.415 17:08:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:19:26.415 17:08:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80850 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:26.415 17:08:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81522 00:19:26.415 17:08:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:33.042 17:08:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:19:33.042 17:08:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:33.042 17:08:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:19:33.042 17:08:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:33.042 Attaching 4 probes... 
00:19:33.042 @path[10.0.0.2, 4420]: 17073 00:19:33.042 @path[10.0.0.2, 4420]: 17643 00:19:33.042 @path[10.0.0.2, 4420]: 17651 00:19:33.042 @path[10.0.0.2, 4420]: 17664 00:19:33.042 @path[10.0.0.2, 4420]: 17632 00:19:33.042 17:08:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:33.042 17:08:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:19:33.042 17:08:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:33.042 17:08:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:19:33.042 17:08:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:19:33.042 17:08:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:19:33.042 17:08:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81522 00:19:33.042 17:08:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:33.042 17:08:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:19:33.042 [2024-07-15 17:08:23.104060] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:19:33.042 17:08:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:19:33.300 17:08:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:19:39.862 17:08:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:19:39.862 17:08:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81698 00:19:39.862 17:08:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80850 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:39.862 17:08:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:45.125 17:08:35 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:45.125 17:08:35 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:19:45.384 17:08:35 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:19:45.384 17:08:35 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:45.384 Attaching 4 probes... 
00:19:45.384 @path[10.0.0.2, 4421]: 17164 00:19:45.384 @path[10.0.0.2, 4421]: 17268 00:19:45.384 @path[10.0.0.2, 4421]: 17633 00:19:45.384 @path[10.0.0.2, 4421]: 17641 00:19:45.384 @path[10.0.0.2, 4421]: 17659 00:19:45.384 17:08:35 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:45.384 17:08:35 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:45.384 17:08:35 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:19:45.384 17:08:35 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:19:45.384 17:08:35 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:19:45.384 17:08:35 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:19:45.384 17:08:35 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81698 00:19:45.384 17:08:35 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:45.384 17:08:35 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 80903 00:19:45.384 17:08:35 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@948 -- # '[' -z 80903 ']' 00:19:45.384 17:08:35 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # kill -0 80903 00:19:45.384 17:08:35 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # uname 00:19:45.384 17:08:35 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:45.384 17:08:35 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80903 00:19:45.703 killing process with pid 80903 00:19:45.703 17:08:35 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:45.703 17:08:35 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:45.703 17:08:35 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80903' 00:19:45.703 17:08:35 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@967 -- # kill 80903 00:19:45.703 17:08:35 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@972 -- # wait 80903 00:19:45.703 Connection closed with partial response: 00:19:45.703 00:19:45.703 00:19:45.703 17:08:35 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 80903 00:19:45.703 17:08:35 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:45.703 [2024-07-15 17:07:38.823546] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:19:45.703 [2024-07-15 17:07:38.823658] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80903 ] 00:19:45.703 [2024-07-15 17:07:38.959848] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:45.703 [2024-07-15 17:07:39.102249] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:45.703 [2024-07-15 17:07:39.156488] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:45.703 Running I/O for 90 seconds... 
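Every confirm_io_on_port round earlier in this run followed the same recipe: bpftrace tallies completed I/O per path into trace.txt, and the port it reports is compared with the trsvcid of whichever listener currently has the expected ANA state. A sketch of that comparison, reusing the jq/awk/cut/sed pipeline from the trace (trace.txt is the bpftrace output written under test/nvmf/host, and a single matching listener is assumed):

    # Sketch only: same subsystem NQN and filters as in the log above.
    expected=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 \
        | jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid')
    active=$(awk '$1=="@path[10.0.0.2," {print $2}' trace.txt | cut -d ']' -f1 | sed -n 1p)
    [[ "$active" == "$expected" ]] && echo "I/O went to the optimized listener on port $active"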
00:19:45.703 [2024-07-15 17:07:48.812755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:50200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.703 [2024-07-15 17:07:48.812828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:45.703 [2024-07-15 17:07:48.812865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:50208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.703 [2024-07-15 17:07:48.812883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:45.703 [2024-07-15 17:07:48.812905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:50216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.703 [2024-07-15 17:07:48.812920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:45.703 [2024-07-15 17:07:48.812941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:50224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.703 [2024-07-15 17:07:48.812956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:45.703 [2024-07-15 17:07:48.812977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:50232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.703 [2024-07-15 17:07:48.812991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:45.703 [2024-07-15 17:07:48.813012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:50240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.703 [2024-07-15 17:07:48.813027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:45.703 [2024-07-15 17:07:48.813048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:50248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.703 [2024-07-15 17:07:48.813062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:45.703 [2024-07-15 17:07:48.813083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:50256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.703 [2024-07-15 17:07:48.813097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:45.703 [2024-07-15 17:07:48.813118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.703 [2024-07-15 17:07:48.813133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:45.703 [2024-07-15 17:07:48.813154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:49824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.703 [2024-07-15 17:07:48.813168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:72 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:45.703 [2024-07-15 17:07:48.813189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:49832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.703 [2024-07-15 17:07:48.813236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:45.703 [2024-07-15 17:07:48.813259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:49840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.703 [2024-07-15 17:07:48.813274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:45.703 [2024-07-15 17:07:48.813295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:49848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.703 [2024-07-15 17:07:48.813310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:45.703 [2024-07-15 17:07:48.813332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:49856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.703 [2024-07-15 17:07:48.813346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:45.703 [2024-07-15 17:07:48.813391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:49864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.703 [2024-07-15 17:07:48.813408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:45.703 [2024-07-15 17:07:48.813430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:49872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.703 [2024-07-15 17:07:48.813447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:45.703 [2024-07-15 17:07:48.813468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:49880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.703 [2024-07-15 17:07:48.813482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:45.703 [2024-07-15 17:07:48.813504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:49888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.703 [2024-07-15 17:07:48.813519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:45.703 [2024-07-15 17:07:48.813540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:49896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.703 [2024-07-15 17:07:48.813555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:45.703 [2024-07-15 17:07:48.813576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:49904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.703 [2024-07-15 17:07:48.813591] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:45.703 [2024-07-15 17:07:48.813612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:49912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.703 [2024-07-15 17:07:48.813627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:45.703 [2024-07-15 17:07:48.813648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:49920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.703 [2024-07-15 17:07:48.813663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:45.704 [2024-07-15 17:07:48.813684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:49928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.704 [2024-07-15 17:07:48.813708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:45.704 [2024-07-15 17:07:48.813731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:49936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.704 [2024-07-15 17:07:48.813746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:45.704 [2024-07-15 17:07:48.813895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:50264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.704 [2024-07-15 17:07:48.813919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:45.704 [2024-07-15 17:07:48.813943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:50272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.704 [2024-07-15 17:07:48.813958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:45.704 [2024-07-15 17:07:48.813979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:50280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.704 [2024-07-15 17:07:48.813994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:45.704 [2024-07-15 17:07:48.814015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:50288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.704 [2024-07-15 17:07:48.814029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:45.704 [2024-07-15 17:07:48.814051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:50296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.704 [2024-07-15 17:07:48.814065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:45.704 [2024-07-15 17:07:48.814086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:50304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:45.704 [2024-07-15 17:07:48.814101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:45.704 [2024-07-15 17:07:48.814122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:50312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.704 [2024-07-15 17:07:48.814136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:45.704 [2024-07-15 17:07:48.814158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:50320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.704 [2024-07-15 17:07:48.814173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:45.704 [2024-07-15 17:07:48.814194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:50328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.704 [2024-07-15 17:07:48.814209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:45.704 [2024-07-15 17:07:48.814231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:50336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.704 [2024-07-15 17:07:48.814245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:45.704 [2024-07-15 17:07:48.814267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:50344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.704 [2024-07-15 17:07:48.814282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:45.704 [2024-07-15 17:07:48.814314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:50352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.704 [2024-07-15 17:07:48.814329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:45.704 [2024-07-15 17:07:48.814351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:50360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.704 [2024-07-15 17:07:48.814381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:45.704 [2024-07-15 17:07:48.814404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:50368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.704 [2024-07-15 17:07:48.814419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:45.704 [2024-07-15 17:07:48.814440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:50376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.704 [2024-07-15 17:07:48.814455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:45.704 [2024-07-15 17:07:48.814476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 
lba:50384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.704 [2024-07-15 17:07:48.814490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:45.704 [2024-07-15 17:07:48.814511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:49944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.704 [2024-07-15 17:07:48.814526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:45.704 [2024-07-15 17:07:48.814547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:49952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.704 [2024-07-15 17:07:48.814562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:45.704 [2024-07-15 17:07:48.814583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:49960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.704 [2024-07-15 17:07:48.814598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:45.704 [2024-07-15 17:07:48.814619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:49968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.704 [2024-07-15 17:07:48.814633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:45.704 [2024-07-15 17:07:48.814655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:49976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.704 [2024-07-15 17:07:48.814669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:45.704 [2024-07-15 17:07:48.814690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:49984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.704 [2024-07-15 17:07:48.814704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:45.704 [2024-07-15 17:07:48.814725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:49992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.704 [2024-07-15 17:07:48.814740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:45.704 [2024-07-15 17:07:48.814768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:50000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.704 [2024-07-15 17:07:48.814784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:45.704 [2024-07-15 17:07:48.814805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:50392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.704 [2024-07-15 17:07:48.814820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:45.704 [2024-07-15 17:07:48.814841] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:50400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.704 [2024-07-15 17:07:48.814856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:45.704 [2024-07-15 17:07:48.814877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:50408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.704 [2024-07-15 17:07:48.814892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:45.704 [2024-07-15 17:07:48.814913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:50416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.704 [2024-07-15 17:07:48.814927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:45.704 [2024-07-15 17:07:48.814948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:50424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.704 [2024-07-15 17:07:48.814962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:45.704 [2024-07-15 17:07:48.814984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:50432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.704 [2024-07-15 17:07:48.814998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:45.704 [2024-07-15 17:07:48.815019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:50440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.704 [2024-07-15 17:07:48.815033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:45.704 [2024-07-15 17:07:48.815054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:50448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.704 [2024-07-15 17:07:48.815069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:45.704 [2024-07-15 17:07:48.815089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:50456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.704 [2024-07-15 17:07:48.815104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:45.704 [2024-07-15 17:07:48.815124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:50464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.704 [2024-07-15 17:07:48.815139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:45.704 [2024-07-15 17:07:48.815160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:50472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.704 [2024-07-15 17:07:48.815174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:006a p:0 m:0 dnr:0 
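Every completion in this stretch of try.txt, above and below, is printed by spdk_nvme_print_completion with status (03/02): status code type 3h (path-related), status code 02h, which the log spells out as ASYMMETRIC ACCESS INACCESSIBLE. In other words, bdevperf keeps issuing reads and writes while the path it is using reports ANA inaccessible, consistent with the multipath test moving I/O between listeners; dnr:0 in each record means the Do Not Retry bit is clear, so these I/Os can be retried on another path. A small decoder for that sct/sc pair, as an illustrative Python sketch (not SPDK code; only the status names relevant here are mapped, following the NVMe status tables):

SCT = {0x0: "GENERIC", 0x1: "COMMAND SPECIFIC", 0x2: "MEDIA AND DATA INTEGRITY", 0x3: "PATH RELATED"}
PATH_SC = {
    0x1: "ASYMMETRIC ACCESS PERSISTENT LOSS",
    0x2: "ASYMMETRIC ACCESS INACCESSIBLE",  # the status every completion above reports
    0x3: "ASYMMETRIC ACCESS TRANSITION",
}

def decode_status(pair: str) -> str:
    """Turn a log token like '03/02' into 'PATH RELATED: ASYMMETRIC ACCESS INACCESSIBLE'."""
    sct, sc = (int(part, 16) for part in pair.split("/"))
    sct_name = SCT.get(sct, f"SCT {sct:#x}")
    sc_name = PATH_SC.get(sc, f"SC {sc:#x}") if sct == 0x3 else f"SC {sc:#x}"
    return f"{sct_name}: {sc_name}"

print(decode_status("03/02"))  # -> PATH RELATED: ASYMMETRIC ACCESS INACCESSIBLE

The same pattern continues through the rest of the dump below.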
00:19:45.704 [2024-07-15 17:07:48.815201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:50480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.704 [2024-07-15 17:07:48.815216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:45.704 [2024-07-15 17:07:48.815237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:50488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.704 [2024-07-15 17:07:48.815251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:45.704 [2024-07-15 17:07:48.815272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:50496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.704 [2024-07-15 17:07:48.815286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:45.704 [2024-07-15 17:07:48.815307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:50504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.704 [2024-07-15 17:07:48.815322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:45.704 [2024-07-15 17:07:48.815343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:50512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.704 [2024-07-15 17:07:48.815370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:45.705 [2024-07-15 17:07:48.815412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:50520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.705 [2024-07-15 17:07:48.815431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:45.705 [2024-07-15 17:07:48.815453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:50528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.705 [2024-07-15 17:07:48.815468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:45.705 [2024-07-15 17:07:48.815491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:50536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.705 [2024-07-15 17:07:48.815518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:45.705 [2024-07-15 17:07:48.815543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:50544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.705 [2024-07-15 17:07:48.815558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:45.705 [2024-07-15 17:07:48.815581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:50008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.705 [2024-07-15 17:07:48.815596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:78 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:45.705 [2024-07-15 17:07:48.815617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:50016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.705 [2024-07-15 17:07:48.815632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:45.705 [2024-07-15 17:07:48.815654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:50024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.705 [2024-07-15 17:07:48.815668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:45.705 [2024-07-15 17:07:48.815689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:50032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.705 [2024-07-15 17:07:48.815717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:45.705 [2024-07-15 17:07:48.815740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:50040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.705 [2024-07-15 17:07:48.815755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:45.705 [2024-07-15 17:07:48.815775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:50048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.705 [2024-07-15 17:07:48.815790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:45.705 [2024-07-15 17:07:48.815811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:50056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.705 [2024-07-15 17:07:48.815825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:45.705 [2024-07-15 17:07:48.815847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:50064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.705 [2024-07-15 17:07:48.815861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:45.705 [2024-07-15 17:07:48.815881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:50552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.705 [2024-07-15 17:07:48.815896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:45.705 [2024-07-15 17:07:48.815917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:50560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.705 [2024-07-15 17:07:48.815931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:45.705 [2024-07-15 17:07:48.815960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:50568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.705 [2024-07-15 17:07:48.815975] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:45.705 [2024-07-15 17:07:48.815996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:50576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.705 [2024-07-15 17:07:48.816011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:45.705 [2024-07-15 17:07:48.816032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:50584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.705 [2024-07-15 17:07:48.816046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.705 [2024-07-15 17:07:48.816067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:50592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.705 [2024-07-15 17:07:48.816082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:45.705 [2024-07-15 17:07:48.816103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:50600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.705 [2024-07-15 17:07:48.816118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:45.705 [2024-07-15 17:07:48.816139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:50608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.705 [2024-07-15 17:07:48.816160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:45.705 [2024-07-15 17:07:48.816182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:50616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.705 [2024-07-15 17:07:48.816196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:45.705 [2024-07-15 17:07:48.816217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:50624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.705 [2024-07-15 17:07:48.816232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:45.705 [2024-07-15 17:07:48.816253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:50632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.705 [2024-07-15 17:07:48.816267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:45.705 [2024-07-15 17:07:48.816288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:50640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.705 [2024-07-15 17:07:48.816303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:45.705 [2024-07-15 17:07:48.816324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:50648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:45.705 [2024-07-15 17:07:48.816339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:45.705 [2024-07-15 17:07:48.816371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:50656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.705 [2024-07-15 17:07:48.816389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:45.705 [2024-07-15 17:07:48.816411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:50664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.705 [2024-07-15 17:07:48.816426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:45.705 [2024-07-15 17:07:48.816447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:50672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.705 [2024-07-15 17:07:48.816461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:45.705 [2024-07-15 17:07:48.816482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:50680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.705 [2024-07-15 17:07:48.816496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:45.705 [2024-07-15 17:07:48.816517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:50688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.705 [2024-07-15 17:07:48.816532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:45.705 [2024-07-15 17:07:48.816560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:50072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.705 [2024-07-15 17:07:48.816575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:45.705 [2024-07-15 17:07:48.816596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:50080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.705 [2024-07-15 17:07:48.816610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:45.705 [2024-07-15 17:07:48.816638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:50088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.705 [2024-07-15 17:07:48.816653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:45.705 [2024-07-15 17:07:48.816674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:50096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.705 [2024-07-15 17:07:48.816689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:45.705 [2024-07-15 17:07:48.816710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 
lba:50104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.705 [2024-07-15 17:07:48.816725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:45.705 [2024-07-15 17:07:48.816746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:50112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.705 [2024-07-15 17:07:48.816760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:45.705 [2024-07-15 17:07:48.816781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:50120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.705 [2024-07-15 17:07:48.816796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:45.705 [2024-07-15 17:07:48.816817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:50128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.705 [2024-07-15 17:07:48.816831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:45.706 [2024-07-15 17:07:48.816852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:50136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.706 [2024-07-15 17:07:48.816866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:45.706 [2024-07-15 17:07:48.816887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:50144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.706 [2024-07-15 17:07:48.816901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:45.706 [2024-07-15 17:07:48.816922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:50152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.706 [2024-07-15 17:07:48.816937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:45.706 [2024-07-15 17:07:48.816958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:50160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.706 [2024-07-15 17:07:48.816972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:45.706 [2024-07-15 17:07:48.816993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:50168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.706 [2024-07-15 17:07:48.817007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:45.706 [2024-07-15 17:07:48.817028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:50176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.706 [2024-07-15 17:07:48.817043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:45.706 [2024-07-15 17:07:48.817070] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:50184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.706 [2024-07-15 17:07:48.817085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:45.706 [2024-07-15 17:07:48.818305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:50192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.706 [2024-07-15 17:07:48.818335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:45.706 [2024-07-15 17:07:48.818378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:50696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.706 [2024-07-15 17:07:48.818398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:45.706 [2024-07-15 17:07:48.818421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:50704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.706 [2024-07-15 17:07:48.818435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:45.706 [2024-07-15 17:07:48.818457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:50712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.706 [2024-07-15 17:07:48.818471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:45.706 [2024-07-15 17:07:48.818492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:50720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.706 [2024-07-15 17:07:48.818506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:45.706 [2024-07-15 17:07:48.818528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:50728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.706 [2024-07-15 17:07:48.818542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:45.706 [2024-07-15 17:07:48.818563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:50736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.706 [2024-07-15 17:07:48.818577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:45.706 [2024-07-15 17:07:48.818598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:50744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.706 [2024-07-15 17:07:48.818613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:45.706 [2024-07-15 17:07:48.818649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:50752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.706 [2024-07-15 17:07:48.818668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 
00:19:45.706 [2024-07-15 17:07:48.818690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:50760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.706 [2024-07-15 17:07:48.818705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:45.706 [2024-07-15 17:07:48.818726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:50768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.706 [2024-07-15 17:07:48.818740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:45.706 [2024-07-15 17:07:48.818760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:50776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.706 [2024-07-15 17:07:48.818786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:45.706 [2024-07-15 17:07:48.818810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:50784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.706 [2024-07-15 17:07:48.818825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:45.706 [2024-07-15 17:07:48.818845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:50792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.706 [2024-07-15 17:07:48.818860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:45.706 [2024-07-15 17:07:48.818880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:50800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.706 [2024-07-15 17:07:48.818895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:45.706 [2024-07-15 17:07:48.818916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:50808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.706 [2024-07-15 17:07:48.818930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:45.706 [2024-07-15 17:07:48.818955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:50816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.706 [2024-07-15 17:07:48.818971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:45.706 [2024-07-15 17:07:48.818992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:50824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.706 [2024-07-15 17:07:48.819007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:45.706 [2024-07-15 17:07:48.819028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:50832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.706 [2024-07-15 17:07:48.819042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:19 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:45.706 [2024-07-15 17:07:48.819063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:50200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.706 [2024-07-15 17:07:48.819077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:45.706 [2024-07-15 17:07:48.819098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:50208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.706 [2024-07-15 17:07:48.819112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:45.706 [2024-07-15 17:07:48.819135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:50216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.706 [2024-07-15 17:07:48.819149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:45.706 [2024-07-15 17:07:48.819170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:50224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.706 [2024-07-15 17:07:48.819186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:45.706 [2024-07-15 17:07:48.819207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:50232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.706 [2024-07-15 17:07:48.819228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:45.706 [2024-07-15 17:07:48.819671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:50240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.706 [2024-07-15 17:07:48.819697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:45.706 [2024-07-15 17:07:48.819724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:50248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.706 [2024-07-15 17:07:48.819740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:45.706 [2024-07-15 17:07:48.819762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:50256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.706 [2024-07-15 17:07:48.819776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:45.706 [2024-07-15 17:07:48.819797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:49816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.706 [2024-07-15 17:07:48.819811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:45.706 [2024-07-15 17:07:48.819833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:49824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.706 [2024-07-15 17:07:48.819847] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:45.706 [2024-07-15 17:07:48.819868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:49832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.706 [2024-07-15 17:07:48.819883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:45.706 [2024-07-15 17:07:48.819903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:49840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.706 [2024-07-15 17:07:48.819918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:45.706 [2024-07-15 17:07:48.819939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:49848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.707 [2024-07-15 17:07:48.819954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:45.707 [2024-07-15 17:07:48.819975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:49856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.707 [2024-07-15 17:07:48.819989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:45.707 [2024-07-15 17:07:48.820010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:49864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.707 [2024-07-15 17:07:48.820025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:45.707 [2024-07-15 17:07:48.820046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:49872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.707 [2024-07-15 17:07:48.820060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:45.707 [2024-07-15 17:07:48.820081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:49880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.707 [2024-07-15 17:07:48.820096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:45.707 [2024-07-15 17:07:48.820128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:49888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.707 [2024-07-15 17:07:48.820144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:45.707 [2024-07-15 17:07:48.820165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:49896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.707 [2024-07-15 17:07:48.820180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:45.707 [2024-07-15 17:07:48.820202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:49904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:45.707 [2024-07-15 17:07:48.820216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:45.707 [2024-07-15 17:07:48.820237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:49912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.707 [2024-07-15 17:07:48.820251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:45.707 [2024-07-15 17:07:48.820272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:49920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.707 [2024-07-15 17:07:48.820286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:45.707 [2024-07-15 17:07:48.820308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:49928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.707 [2024-07-15 17:07:48.820322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:45.707 [2024-07-15 17:07:48.820343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:49936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.707 [2024-07-15 17:07:48.820370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:45.707 [2024-07-15 17:07:48.820395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:50264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.707 [2024-07-15 17:07:48.820410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:45.707 [2024-07-15 17:07:48.820431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:50272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.707 [2024-07-15 17:07:48.820446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:45.707 [2024-07-15 17:07:48.820466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:50280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.707 [2024-07-15 17:07:48.820481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:45.707 [2024-07-15 17:07:48.820502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:50288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.707 [2024-07-15 17:07:48.820516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:45.707 [2024-07-15 17:07:48.820538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:50296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.707 [2024-07-15 17:07:48.820558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:45.707 [2024-07-15 17:07:48.820592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 
nsid:1 lba:50304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.707 [2024-07-15 17:07:48.820609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:45.707 [2024-07-15 17:07:48.820631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:50312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.707 [2024-07-15 17:07:48.820646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:45.707 [2024-07-15 17:07:48.820667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:50320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.707 [2024-07-15 17:07:48.820682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:45.707 [2024-07-15 17:07:48.820703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:50328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.707 [2024-07-15 17:07:48.820717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:45.707 [2024-07-15 17:07:48.820738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:50336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.707 [2024-07-15 17:07:48.820752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:45.707 [2024-07-15 17:07:48.820773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:50344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.707 [2024-07-15 17:07:48.820787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:45.707 [2024-07-15 17:07:48.820809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:50352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.707 [2024-07-15 17:07:48.820823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:45.707 [2024-07-15 17:07:48.820844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:50360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.707 [2024-07-15 17:07:48.820858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:45.707 [2024-07-15 17:07:48.820879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:50368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.707 [2024-07-15 17:07:48.820893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:45.707 [2024-07-15 17:07:48.820914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:50376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.707 [2024-07-15 17:07:48.820928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:45.707 [2024-07-15 17:07:48.820949] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:50384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:19:45.707 [2024-07-15 17:07:48.820963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:19:45.707 [2024-07-15 17:07:48.820984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:49944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:45.707 [2024-07-15 17:07:48.820998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
[... repeated output elided: the same *NOTICE* command/completion pattern continues for every queued I/O on qid:1 (READ lba 49816-50192, WRITE lba 50200-50832), each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02), from 17:07:48.821 through 17:07:48.830 ...]
00:19:45.712 [2024-07-15 17:07:48.830248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:50096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:45.712 [2024-07-15 17:07:48.830263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:19:45.712 [2024-07-15 17:07:48.830284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:50104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.713 [2024-07-15 17:07:48.830298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:45.713 [2024-07-15 17:07:48.830319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:50112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.713 [2024-07-15 17:07:48.830333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:45.713 [2024-07-15 17:07:48.830365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:50120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.713 [2024-07-15 17:07:48.830382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:45.713 [2024-07-15 17:07:48.830404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:50128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.713 [2024-07-15 17:07:48.830419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:45.713 [2024-07-15 17:07:48.830440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:50136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.713 [2024-07-15 17:07:48.830456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:45.713 [2024-07-15 17:07:48.830477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:50144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.713 [2024-07-15 17:07:48.830491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:45.713 [2024-07-15 17:07:48.830512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:50152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.713 [2024-07-15 17:07:48.830526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:45.713 [2024-07-15 17:07:48.830547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:50160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.713 [2024-07-15 17:07:48.830562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:45.713 [2024-07-15 17:07:48.830583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:50168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.713 [2024-07-15 17:07:48.830597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:45.713 [2024-07-15 17:07:48.830618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:50176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.713 [2024-07-15 17:07:48.830632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:45.713 [2024-07-15 17:07:48.830653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:50184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.713 [2024-07-15 17:07:48.830667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:45.713 [2024-07-15 17:07:48.830694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:50192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.713 [2024-07-15 17:07:48.830710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:45.713 [2024-07-15 17:07:48.830731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:50696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.713 [2024-07-15 17:07:48.830746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:45.713 [2024-07-15 17:07:48.831329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:50704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.713 [2024-07-15 17:07:48.831367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:45.713 [2024-07-15 17:07:48.831397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:50712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.713 [2024-07-15 17:07:48.831414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:45.713 [2024-07-15 17:07:48.831436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:50720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.713 [2024-07-15 17:07:48.831450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:45.713 [2024-07-15 17:07:48.831471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:50728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.713 [2024-07-15 17:07:48.831485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:45.713 [2024-07-15 17:07:48.831516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:50736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.713 [2024-07-15 17:07:48.831533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:45.713 [2024-07-15 17:07:48.831556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:50744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.713 [2024-07-15 17:07:48.831570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:45.713 [2024-07-15 17:07:48.831591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:50752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.713 [2024-07-15 17:07:48.831605] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:45.713 [2024-07-15 17:07:48.831626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:50760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.713 [2024-07-15 17:07:48.831640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:45.713 [2024-07-15 17:07:48.831661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:50768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.713 [2024-07-15 17:07:48.831675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:45.713 [2024-07-15 17:07:48.831697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:50776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.713 [2024-07-15 17:07:48.831712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:45.713 [2024-07-15 17:07:48.831733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:50784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.713 [2024-07-15 17:07:48.831760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:45.713 [2024-07-15 17:07:48.831782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:50792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.713 [2024-07-15 17:07:48.831797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:45.713 [2024-07-15 17:07:48.831817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:50800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.713 [2024-07-15 17:07:48.831832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:45.713 [2024-07-15 17:07:48.831853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:50808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.713 [2024-07-15 17:07:48.831867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:45.713 [2024-07-15 17:07:48.831888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:50816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.713 [2024-07-15 17:07:48.831902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:45.713 [2024-07-15 17:07:48.831923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:50824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.713 [2024-07-15 17:07:48.831937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:45.713 [2024-07-15 17:07:48.831958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:50832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:45.713 [2024-07-15 17:07:48.831972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:45.713 [2024-07-15 17:07:48.831993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:50200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.713 [2024-07-15 17:07:48.832007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:45.713 [2024-07-15 17:07:48.832028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:50208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.713 [2024-07-15 17:07:48.832043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:45.713 [2024-07-15 17:07:48.832064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:50216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.713 [2024-07-15 17:07:48.832078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:45.713 [2024-07-15 17:07:48.832098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:50224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.713 [2024-07-15 17:07:48.832112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:45.713 [2024-07-15 17:07:48.832133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:50232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.713 [2024-07-15 17:07:48.832147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:45.713 [2024-07-15 17:07:48.832168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:50240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.713 [2024-07-15 17:07:48.832188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:45.713 [2024-07-15 17:07:48.832210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:50248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.713 [2024-07-15 17:07:48.832225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:45.713 [2024-07-15 17:07:48.832246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:50256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.713 [2024-07-15 17:07:48.832261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:45.713 [2024-07-15 17:07:48.832282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:49816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.713 [2024-07-15 17:07:48.832296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:45.713 [2024-07-15 17:07:48.832317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 
lba:49824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.713 [2024-07-15 17:07:48.832331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:45.713 [2024-07-15 17:07:48.832352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:49832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.713 [2024-07-15 17:07:48.832379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:45.713 [2024-07-15 17:07:48.832402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:49840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.713 [2024-07-15 17:07:48.832417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:45.713 [2024-07-15 17:07:48.832438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:49848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.714 [2024-07-15 17:07:48.832452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:45.714 [2024-07-15 17:07:48.832473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:49856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.714 [2024-07-15 17:07:48.832487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:45.714 [2024-07-15 17:07:48.832508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:49864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.714 [2024-07-15 17:07:48.832523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:45.714 [2024-07-15 17:07:48.832543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:49872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.714 [2024-07-15 17:07:48.832558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:45.714 [2024-07-15 17:07:48.832578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:49880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.714 [2024-07-15 17:07:48.832592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:45.714 [2024-07-15 17:07:48.832613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:49888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.714 [2024-07-15 17:07:48.832627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:45.714 [2024-07-15 17:07:48.832655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:49896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.714 [2024-07-15 17:07:48.832671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:45.714 [2024-07-15 17:07:48.832693] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:49904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.714 [2024-07-15 17:07:48.832707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:45.714 [2024-07-15 17:07:48.832727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:49912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.714 [2024-07-15 17:07:48.832741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:45.714 [2024-07-15 17:07:48.832762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:49920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.714 [2024-07-15 17:07:48.832777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:45.714 [2024-07-15 17:07:48.832798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:49928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.714 [2024-07-15 17:07:48.832812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:45.714 [2024-07-15 17:07:48.832833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:49936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.714 [2024-07-15 17:07:48.832848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:45.714 [2024-07-15 17:07:48.832869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:50264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.714 [2024-07-15 17:07:48.832883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:45.714 [2024-07-15 17:07:48.832904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:50272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.714 [2024-07-15 17:07:48.832918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:45.714 [2024-07-15 17:07:48.832939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:50280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.714 [2024-07-15 17:07:48.832953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:45.714 [2024-07-15 17:07:48.832974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:50288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.714 [2024-07-15 17:07:48.832988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:45.714 [2024-07-15 17:07:48.833008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:50296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.714 [2024-07-15 17:07:48.833023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004c p:0 m:0 dnr:0 
00:19:45.714 [2024-07-15 17:07:48.833044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:50304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.714 [2024-07-15 17:07:48.833058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:45.714 [2024-07-15 17:07:48.833085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:50312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.714 [2024-07-15 17:07:48.833100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:45.714 [2024-07-15 17:07:48.833142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:50320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.714 [2024-07-15 17:07:48.833162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:45.714 [2024-07-15 17:07:48.833184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:50328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.714 [2024-07-15 17:07:48.833199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:45.714 [2024-07-15 17:07:48.833219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:50336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.714 [2024-07-15 17:07:48.833233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:45.714 [2024-07-15 17:07:48.833254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:50344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.714 [2024-07-15 17:07:48.833268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:45.714 [2024-07-15 17:07:48.833289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:50352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.714 [2024-07-15 17:07:48.833303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:45.714 [2024-07-15 17:07:48.833324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:50360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.714 [2024-07-15 17:07:48.833338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:45.714 [2024-07-15 17:07:48.833373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:50368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.714 [2024-07-15 17:07:48.833391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:45.714 [2024-07-15 17:07:48.833413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:50376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.714 [2024-07-15 17:07:48.833428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:28 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:45.714 [2024-07-15 17:07:48.833448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.714 [2024-07-15 17:07:48.833463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:45.714 [2024-07-15 17:07:48.833484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:49944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.714 [2024-07-15 17:07:48.833503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:45.714 [2024-07-15 17:07:48.833524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:49952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.714 [2024-07-15 17:07:48.833538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:45.714 [2024-07-15 17:07:48.833559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:49960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.714 [2024-07-15 17:07:48.833584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:45.714 [2024-07-15 17:07:48.833607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:49968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.714 [2024-07-15 17:07:48.833621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:45.714 [2024-07-15 17:07:48.833642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:49976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.714 [2024-07-15 17:07:48.833656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:45.714 [2024-07-15 17:07:48.833677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:49984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.714 [2024-07-15 17:07:48.833691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:45.714 [2024-07-15 17:07:48.833712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:49992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.714 [2024-07-15 17:07:48.833726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:45.714 [2024-07-15 17:07:48.833747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:50000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.714 [2024-07-15 17:07:48.833761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:45.714 [2024-07-15 17:07:48.833782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:50392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.714 [2024-07-15 17:07:48.833796] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:45.714 [2024-07-15 17:07:48.833817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:50400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.714 [2024-07-15 17:07:48.833831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:45.714 [2024-07-15 17:07:48.833852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:50408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.714 [2024-07-15 17:07:48.833866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:45.714 [2024-07-15 17:07:48.833887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:50416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.714 [2024-07-15 17:07:48.833909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:45.714 [2024-07-15 17:07:48.833938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:50424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.714 [2024-07-15 17:07:48.833953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:45.714 [2024-07-15 17:07:48.833974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:50432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.714 [2024-07-15 17:07:48.833989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:45.714 [2024-07-15 17:07:48.834016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:50440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.715 [2024-07-15 17:07:48.834039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:45.715 [2024-07-15 17:07:48.834061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:50448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.715 [2024-07-15 17:07:48.834076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:45.715 [2024-07-15 17:07:48.834097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:50456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.715 [2024-07-15 17:07:48.834111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:45.715 [2024-07-15 17:07:48.834132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:50464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.715 [2024-07-15 17:07:48.834146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:45.715 [2024-07-15 17:07:48.834167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:50472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:45.715 [2024-07-15 17:07:48.834181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:45.715 [2024-07-15 17:07:48.834202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:50480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.715 [2024-07-15 17:07:48.834217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:45.715 [2024-07-15 17:07:48.834237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:50488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.715 [2024-07-15 17:07:48.834251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:45.715 [2024-07-15 17:07:48.834272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:50496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.715 [2024-07-15 17:07:48.834287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:45.715 [2024-07-15 17:07:48.834307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:50504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.715 [2024-07-15 17:07:48.834322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:45.715 [2024-07-15 17:07:48.834342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:50512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.715 [2024-07-15 17:07:48.834371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:45.715 [2024-07-15 17:07:48.834395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:50520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.715 [2024-07-15 17:07:48.834410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:45.715 [2024-07-15 17:07:48.834431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:50528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.715 [2024-07-15 17:07:48.834445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:45.715 [2024-07-15 17:07:48.834466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:50536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.715 [2024-07-15 17:07:48.834480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:45.715 [2024-07-15 17:07:48.834509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:50544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.715 [2024-07-15 17:07:48.834524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:45.715 [2024-07-15 17:07:48.834545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 
lba:50008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.715 [2024-07-15 17:07:48.834559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:45.715 [2024-07-15 17:07:48.834580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:50016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.715 [2024-07-15 17:07:48.834594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:45.715 [2024-07-15 17:07:48.834616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:50024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.715 [2024-07-15 17:07:48.834631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:45.715 [2024-07-15 17:07:48.834651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:50032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.715 [2024-07-15 17:07:48.834666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:45.715 [2024-07-15 17:07:48.834687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:50040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.715 [2024-07-15 17:07:48.834701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:45.715 [2024-07-15 17:07:48.834722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:50048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.715 [2024-07-15 17:07:48.834736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:45.715 [2024-07-15 17:07:48.834756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:50056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.715 [2024-07-15 17:07:48.834770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:45.715 [2024-07-15 17:07:48.834791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:50064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.715 [2024-07-15 17:07:48.834805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:45.715 [2024-07-15 17:07:48.834826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:50552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.715 [2024-07-15 17:07:48.834840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:45.715 [2024-07-15 17:07:48.834860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:50560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.715 [2024-07-15 17:07:48.834875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:45.715 [2024-07-15 17:07:48.834895] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:50568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.715 [2024-07-15 17:07:48.834910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:45.715 [2024-07-15 17:07:48.834941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:50576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.715 [2024-07-15 17:07:48.834958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:45.715 [2024-07-15 17:07:48.834979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:50584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.715 [2024-07-15 17:07:48.834993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.715 [2024-07-15 17:07:48.835013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:50592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.715 [2024-07-15 17:07:48.835028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:45.715 [2024-07-15 17:07:48.835048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:50600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.715 [2024-07-15 17:07:48.835063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:45.715 [2024-07-15 17:07:48.837132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:50608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.715 [2024-07-15 17:07:48.837163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:45.715 [2024-07-15 17:07:48.837206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:50616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.715 [2024-07-15 17:07:48.837226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:45.715 [2024-07-15 17:07:48.837249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:50624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.715 [2024-07-15 17:07:48.837264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:45.715 [2024-07-15 17:07:48.837285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:50632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.715 [2024-07-15 17:07:48.837300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:45.715 [2024-07-15 17:07:48.837321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:50640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.715 [2024-07-15 17:07:48.837336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 
00:19:45.715 [2024-07-15 17:07:48.837372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:50648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.715 [2024-07-15 17:07:48.837391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:45.715 [2024-07-15 17:07:48.837413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:50656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.715 [2024-07-15 17:07:48.837428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:45.715 [2024-07-15 17:07:48.837449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:50664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.715 [2024-07-15 17:07:48.837463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:45.715 [2024-07-15 17:07:48.837484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:50672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.715 [2024-07-15 17:07:48.837510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:45.715 [2024-07-15 17:07:48.837533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:50680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.715 [2024-07-15 17:07:48.837549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:45.715 [2024-07-15 17:07:48.837570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:50688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.715 [2024-07-15 17:07:48.837584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:45.715 [2024-07-15 17:07:48.837605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:50072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.715 [2024-07-15 17:07:48.837619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:45.715 [2024-07-15 17:07:48.837640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:50080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.715 [2024-07-15 17:07:48.837655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:45.715 [2024-07-15 17:07:48.837676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:50088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.716 [2024-07-15 17:07:48.837690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:45.716 [2024-07-15 17:07:48.837711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:50096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.716 [2024-07-15 17:07:48.837725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:115 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:45.716 [2024-07-15 17:07:48.837746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:50104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.716 [2024-07-15 17:07:48.837760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:45.716 [2024-07-15 17:07:48.837781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:50112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.716 [2024-07-15 17:07:48.837794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:45.716 [2024-07-15 17:07:48.837815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:50120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.716 [2024-07-15 17:07:48.837830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:45.716 [2024-07-15 17:07:48.837850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:50128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.716 [2024-07-15 17:07:48.837865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:45.716 [2024-07-15 17:07:48.837885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:50136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.716 [2024-07-15 17:07:48.837900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:45.716 [2024-07-15 17:07:48.837921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:50144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.716 [2024-07-15 17:07:48.837942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:45.716 [2024-07-15 17:07:48.837964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:50152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.716 [2024-07-15 17:07:48.837978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:45.716 [2024-07-15 17:07:48.837999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:50160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.716 [2024-07-15 17:07:48.838013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:45.716 [2024-07-15 17:07:48.838034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:50168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.716 [2024-07-15 17:07:48.838048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:45.716 [2024-07-15 17:07:48.838069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:50176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.716 [2024-07-15 17:07:48.838083] nvme_qpair.c: 
00:19:45.716 [2024-07-15 17:07:48 - 17:08:02] nvme_qpair.c: *NOTICE*: (repeated log entries condensed) nvme_io_qpair_print_command / spdk_nvme_print_completion pairs for queued READ and WRITE commands on qid:1 (varying cid, nsid:1, len:8), each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) cdw0:0 p:0 m:0 dnr:0; the same pattern continues for the remaining outstanding I/O.
lba:35736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.720 [2024-07-15 17:08:02.319683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:45.720 [2024-07-15 17:08:02.319705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:35744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.720 [2024-07-15 17:08:02.319718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:45.720 [2024-07-15 17:08:02.319739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:35752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.720 [2024-07-15 17:08:02.319753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:45.720 [2024-07-15 17:08:02.319774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:35760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.720 [2024-07-15 17:08:02.319787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:45.720 [2024-07-15 17:08:02.319814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:35768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.720 [2024-07-15 17:08:02.319828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:45.720 [2024-07-15 17:08:02.319848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:35776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.720 [2024-07-15 17:08:02.319862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:45.720 [2024-07-15 17:08:02.319883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:35784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.720 [2024-07-15 17:08:02.319897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:45.720 [2024-07-15 17:08:02.319917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:36176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.720 [2024-07-15 17:08:02.319939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:45.720 [2024-07-15 17:08:02.319961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:36184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.720 [2024-07-15 17:08:02.319975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:45.720 [2024-07-15 17:08:02.319996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:36192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.720 [2024-07-15 17:08:02.320010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:45.720 [2024-07-15 17:08:02.320030] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:36200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.720 [2024-07-15 17:08:02.320044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:45.720 [2024-07-15 17:08:02.320065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:36208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.720 [2024-07-15 17:08:02.320078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:45.720 [2024-07-15 17:08:02.320099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:36216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.720 [2024-07-15 17:08:02.320113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:45.720 [2024-07-15 17:08:02.320133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:36224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.720 [2024-07-15 17:08:02.320147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:45.720 [2024-07-15 17:08:02.320169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:36232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.720 [2024-07-15 17:08:02.320184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:45.720 [2024-07-15 17:08:02.320220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:36240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.720 [2024-07-15 17:08:02.320239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:45.720 [2024-07-15 17:08:02.320261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:36248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.720 [2024-07-15 17:08:02.320275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:45.720 [2024-07-15 17:08:02.320295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:36256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.720 [2024-07-15 17:08:02.320309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:45.720 [2024-07-15 17:08:02.320330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:36264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.720 [2024-07-15 17:08:02.320344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:45.720 [2024-07-15 17:08:02.320378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:36272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.720 [2024-07-15 17:08:02.320395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 
00:19:45.720 [2024-07-15 17:08:02.320424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:36280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.720 [2024-07-15 17:08:02.320439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:45.720 [2024-07-15 17:08:02.320460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:36288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.721 [2024-07-15 17:08:02.320474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:45.721 [2024-07-15 17:08:02.320495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:36296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.721 [2024-07-15 17:08:02.320508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:45.721 [2024-07-15 17:08:02.320529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:36304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.721 [2024-07-15 17:08:02.320543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:45.721 [2024-07-15 17:08:02.320564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:36312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.721 [2024-07-15 17:08:02.320578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:45.721 [2024-07-15 17:08:02.320599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:36320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.721 [2024-07-15 17:08:02.320614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:45.721 [2024-07-15 17:08:02.320634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:36328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.721 [2024-07-15 17:08:02.320648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:45.721 [2024-07-15 17:08:02.320668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:36336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.721 [2024-07-15 17:08:02.320682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:45.721 [2024-07-15 17:08:02.320702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:36344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.721 [2024-07-15 17:08:02.320716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:45.721 [2024-07-15 17:08:02.320737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:36352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.721 [2024-07-15 17:08:02.320751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:36 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:45.721 [2024-07-15 17:08:02.320779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:36360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.721 [2024-07-15 17:08:02.320793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:45.721 [2024-07-15 17:08:02.320813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:35792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.721 [2024-07-15 17:08:02.320827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:45.721 [2024-07-15 17:08:02.320855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:35800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.721 [2024-07-15 17:08:02.320869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:45.721 [2024-07-15 17:08:02.320890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:35808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.721 [2024-07-15 17:08:02.320903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:45.721 [2024-07-15 17:08:02.320924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:35816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.721 [2024-07-15 17:08:02.320938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:45.721 [2024-07-15 17:08:02.320958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:35824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.721 [2024-07-15 17:08:02.320972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:45.721 [2024-07-15 17:08:02.320992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:35832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.721 [2024-07-15 17:08:02.321005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:45.721 [2024-07-15 17:08:02.321026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:35840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.721 [2024-07-15 17:08:02.321040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:45.721 [2024-07-15 17:08:02.321060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:35848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.721 [2024-07-15 17:08:02.321074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:45.721 [2024-07-15 17:08:02.321094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:35856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.721 [2024-07-15 17:08:02.321108] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:45.721 [2024-07-15 17:08:02.321129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:35864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.721 [2024-07-15 17:08:02.321142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:45.721 [2024-07-15 17:08:02.321163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:35872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.721 [2024-07-15 17:08:02.321177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:45.721 [2024-07-15 17:08:02.321197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:35880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.721 [2024-07-15 17:08:02.321211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:45.721 [2024-07-15 17:08:02.321232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:35888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.721 [2024-07-15 17:08:02.321246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:45.721 [2024-07-15 17:08:02.321266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:35896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.721 [2024-07-15 17:08:02.321285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:45.721 [2024-07-15 17:08:02.321307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:35904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.721 [2024-07-15 17:08:02.321321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:45.721 [2024-07-15 17:08:02.322669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:35912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.721 [2024-07-15 17:08:02.322697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:45.721 [2024-07-15 17:08:02.322726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:35920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.721 [2024-07-15 17:08:02.322742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:45.721 [2024-07-15 17:08:02.322766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:35928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.721 [2024-07-15 17:08:02.322780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:45.721 [2024-07-15 17:08:02.322800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:35936 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:19:45.721 [2024-07-15 17:08:02.322814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:45.721 [2024-07-15 17:08:02.322835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:35944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.721 [2024-07-15 17:08:02.322849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:45.721 [2024-07-15 17:08:02.322869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:35952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.721 [2024-07-15 17:08:02.322883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:45.721 [2024-07-15 17:08:02.322904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:35960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.721 [2024-07-15 17:08:02.322918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:45.721 [2024-07-15 17:08:02.322938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:35968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.721 [2024-07-15 17:08:02.322952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:45.721 [2024-07-15 17:08:02.322972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:35976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.721 [2024-07-15 17:08:02.322986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:45.721 [2024-07-15 17:08:02.323007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:36368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.721 [2024-07-15 17:08:02.323021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:45.721 [2024-07-15 17:08:02.323042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:36376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.721 [2024-07-15 17:08:02.323069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:45.721 [2024-07-15 17:08:02.323091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:36384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.721 [2024-07-15 17:08:02.323106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:45.721 [2024-07-15 17:08:02.323127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:36392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.721 [2024-07-15 17:08:02.323141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:45.721 [2024-07-15 17:08:02.323162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:28 nsid:1 lba:36400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.721 [2024-07-15 17:08:02.323176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:45.721 [2024-07-15 17:08:02.323197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:36408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.721 [2024-07-15 17:08:02.323211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:45.721 [2024-07-15 17:08:02.323232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:36416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.721 [2024-07-15 17:08:02.323246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:45.721 [2024-07-15 17:08:02.323282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:36424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.721 [2024-07-15 17:08:02.323301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:45.721 [2024-07-15 17:08:02.323322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:35984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.721 [2024-07-15 17:08:02.323336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:45.721 [2024-07-15 17:08:02.323370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.721 [2024-07-15 17:08:02.323388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:45.721 [2024-07-15 17:08:02.323410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:36000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.721 [2024-07-15 17:08:02.323424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:45.721 [2024-07-15 17:08:02.323445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:36008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.721 [2024-07-15 17:08:02.323458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:45.721 [2024-07-15 17:08:02.323479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:36016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.721 [2024-07-15 17:08:02.323493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:45.721 [2024-07-15 17:08:02.323525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:36024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.721 [2024-07-15 17:08:02.323550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:45.721 [2024-07-15 17:08:02.323573] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:36032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.721 [2024-07-15 17:08:02.323587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:45.721 [2024-07-15 17:08:02.323985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:36040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.721 [2024-07-15 17:08:02.324009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:45.721 [2024-07-15 17:08:02.324034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:35408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.721 [2024-07-15 17:08:02.324049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:45.721 [2024-07-15 17:08:02.324070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:35416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.721 [2024-07-15 17:08:02.324084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:45.721 [2024-07-15 17:08:02.324105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:35424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.721 [2024-07-15 17:08:02.324119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:45.721 [2024-07-15 17:08:02.324140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:35432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.721 [2024-07-15 17:08:02.324154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:45.721 [2024-07-15 17:08:02.324174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:35440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.721 [2024-07-15 17:08:02.324188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:45.721 [2024-07-15 17:08:02.324211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:35448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.721 [2024-07-15 17:08:02.324227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.721 [2024-07-15 17:08:02.324249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:35456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.721 [2024-07-15 17:08:02.324264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:45.721 [2024-07-15 17:08:02.324287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:35464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.721 [2024-07-15 17:08:02.324302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 
00:19:45.721 [2024-07-15 17:08:02.324323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:35472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.721 [2024-07-15 17:08:02.324339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:45.721 [2024-07-15 17:08:02.324374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:35480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.721 [2024-07-15 17:08:02.324393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:45.722 [2024-07-15 17:08:02.324427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:35488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.722 [2024-07-15 17:08:02.324443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:45.722 [2024-07-15 17:08:02.324465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:35496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.722 [2024-07-15 17:08:02.324480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:45.722 [2024-07-15 17:08:02.324502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:35504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.722 [2024-07-15 17:08:02.324517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:45.722 [2024-07-15 17:08:02.324538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:35512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.722 [2024-07-15 17:08:02.324553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:45.722 [2024-07-15 17:08:02.324574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:35520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.722 [2024-07-15 17:08:02.324589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:45.722 [2024-07-15 17:08:02.324610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:35528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.722 [2024-07-15 17:08:02.324625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:45.722 [2024-07-15 17:08:02.324647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:35536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.722 [2024-07-15 17:08:02.324662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:45.722 [2024-07-15 17:08:02.324683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:35544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.722 [2024-07-15 17:08:02.324698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:45.722 [2024-07-15 17:08:02.324720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:35552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.722 [2024-07-15 17:08:02.324735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:45.722 [2024-07-15 17:08:02.324757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:35560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.722 [2024-07-15 17:08:02.324771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:45.722 [2024-07-15 17:08:02.324793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:35568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.722 [2024-07-15 17:08:02.324808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:45.722 [2024-07-15 17:08:02.324830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:35576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.722 [2024-07-15 17:08:02.324845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:45.722 [2024-07-15 17:08:02.324876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:35584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.722 [2024-07-15 17:08:02.324892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:45.722 [2024-07-15 17:08:02.324914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:35592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.722 [2024-07-15 17:08:02.324928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:45.722 [2024-07-15 17:08:02.324950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:36048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.722 [2024-07-15 17:08:02.324964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:45.722 [2024-07-15 17:08:02.324985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:36056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.722 [2024-07-15 17:08:02.325000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:45.722 [2024-07-15 17:08:02.325021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:36064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.722 [2024-07-15 17:08:02.325036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:45.722 [2024-07-15 17:08:02.325057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:36072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.722 [2024-07-15 17:08:02.325071] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:45.722 [2024-07-15 17:08:02.325092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:36080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.722 [2024-07-15 17:08:02.325107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:45.722 [2024-07-15 17:08:02.325128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:36088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.722 [2024-07-15 17:08:02.325143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:45.722 [2024-07-15 17:08:02.325164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:36096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.722 [2024-07-15 17:08:02.325186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:45.722 [2024-07-15 17:08:02.325222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:36104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.722 [2024-07-15 17:08:02.325241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:45.722 [2024-07-15 17:08:02.325263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:35600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.722 [2024-07-15 17:08:02.325278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:45.722 [2024-07-15 17:08:02.325299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:35608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.722 [2024-07-15 17:08:02.325314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:45.722 [2024-07-15 17:08:02.325335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:35616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.722 [2024-07-15 17:08:02.325373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:45.722 [2024-07-15 17:08:02.325399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:35624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.722 [2024-07-15 17:08:02.325414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:45.722 [2024-07-15 17:08:02.325436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:35632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.722 [2024-07-15 17:08:02.325451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:45.722 [2024-07-15 17:08:02.325472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:35640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:45.722 [2024-07-15 17:08:02.325486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:45.722 [2024-07-15 17:08:02.325508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:35648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.722 [2024-07-15 17:08:02.325522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:45.722 [2024-07-15 17:08:02.325544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:35656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.722 [2024-07-15 17:08:02.325558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:45.722 [2024-07-15 17:08:02.325580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:35664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.722 [2024-07-15 17:08:02.325594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:45.722 [2024-07-15 17:08:02.325616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:35672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.722 [2024-07-15 17:08:02.325631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:45.722 [2024-07-15 17:08:02.325652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:35680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.722 [2024-07-15 17:08:02.325667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:45.722 [2024-07-15 17:08:02.325688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:35688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.722 [2024-07-15 17:08:02.325703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:45.722 [2024-07-15 17:08:02.325723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:35696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.722 [2024-07-15 17:08:02.325738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:45.722 [2024-07-15 17:08:02.325759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:35704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.722 [2024-07-15 17:08:02.325774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:45.722 [2024-07-15 17:08:02.325795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:35712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.722 [2024-07-15 17:08:02.325817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:45.722 [2024-07-15 17:08:02.325840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 
nsid:1 lba:35720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.722 [2024-07-15 17:08:02.325855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:45.722 [2024-07-15 17:08:02.325877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:36112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.722 [2024-07-15 17:08:02.325892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:45.722 [2024-07-15 17:08:02.325913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:36120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.722 [2024-07-15 17:08:02.325928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:45.722 [2024-07-15 17:08:02.325950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:36128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.722 [2024-07-15 17:08:02.325964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:45.722 [2024-07-15 17:08:02.325986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:36136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.722 [2024-07-15 17:08:02.326001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:45.722 [2024-07-15 17:08:02.326022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:36144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.722 [2024-07-15 17:08:02.326037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:45.722 [2024-07-15 17:08:02.326058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:36152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.722 [2024-07-15 17:08:02.326073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:45.722 [2024-07-15 17:08:02.326095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:36160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.722 [2024-07-15 17:08:02.326109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:45.722 [2024-07-15 17:08:02.326150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:36168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.722 [2024-07-15 17:08:02.326170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:45.722 [2024-07-15 17:08:02.326193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:35728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.722 [2024-07-15 17:08:02.326209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:45.722 [2024-07-15 17:08:02.326231] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:35736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.722 [2024-07-15 17:08:02.326246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:45.722 [2024-07-15 17:08:02.326267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:35744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.722 [2024-07-15 17:08:02.326281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:45.722 [2024-07-15 17:08:02.326311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:35752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.722 [2024-07-15 17:08:02.326327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:45.722 [2024-07-15 17:08:02.326350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:35760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.722 [2024-07-15 17:08:02.326377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:45.722 [2024-07-15 17:08:02.326401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:35768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.722 [2024-07-15 17:08:02.326416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:45.722 [2024-07-15 17:08:02.326438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:35776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.722 [2024-07-15 17:08:02.326453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:45.722 [2024-07-15 17:08:02.326475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:35784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.722 [2024-07-15 17:08:02.326490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:45.722 [2024-07-15 17:08:02.326511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:36176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.722 [2024-07-15 17:08:02.326533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:45.722 [2024-07-15 17:08:02.326557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:36184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.722 [2024-07-15 17:08:02.326572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:45.722 [2024-07-15 17:08:02.326593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:36192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.722 [2024-07-15 17:08:02.326607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003d p:0 m:0 dnr:0 
00:19:45.722 [2024-07-15 17:08:02.326629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:36200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.722 [2024-07-15 17:08:02.326644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:45.722 [2024-07-15 17:08:02.326666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:36208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.722 [2024-07-15 17:08:02.326681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:45.722 [2024-07-15 17:08:02.326702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:36216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.723 [2024-07-15 17:08:02.326717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:45.723 [2024-07-15 17:08:02.326739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:36224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.723 [2024-07-15 17:08:02.326754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:45.723 [2024-07-15 17:08:02.326787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:36232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.723 [2024-07-15 17:08:02.326803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:45.723 [2024-07-15 17:08:02.326824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:36240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.723 [2024-07-15 17:08:02.326839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:45.723 [2024-07-15 17:08:02.326860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:36248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.723 [2024-07-15 17:08:02.326875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:45.723 [2024-07-15 17:08:02.326897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:36256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.723 [2024-07-15 17:08:02.326912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:45.723 [2024-07-15 17:08:02.326934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:36264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.723 [2024-07-15 17:08:02.326949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:45.723 [2024-07-15 17:08:02.326970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:36272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.723 [2024-07-15 17:08:02.326985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:53 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:45.723 [2024-07-15 17:08:02.327006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:36280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.723 [2024-07-15 17:08:02.327021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:45.723 [2024-07-15 17:08:02.327043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:36288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.723 [2024-07-15 17:08:02.327057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:45.723 [2024-07-15 17:08:02.327079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:36296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.723 [2024-07-15 17:08:02.327094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:45.723 [2024-07-15 17:08:02.327115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:36304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.723 [2024-07-15 17:08:02.327135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:45.723 [2024-07-15 17:08:02.327157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:36312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.723 [2024-07-15 17:08:02.327172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:45.723 [2024-07-15 17:08:02.327193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:36320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.723 [2024-07-15 17:08:02.327208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:45.723 [2024-07-15 17:08:02.327230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:36328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.723 [2024-07-15 17:08:02.327251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:45.723 [2024-07-15 17:08:02.327274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:36336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.723 [2024-07-15 17:08:02.327289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:45.723 [2024-07-15 17:08:02.327310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:36344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.723 [2024-07-15 17:08:02.327325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:45.723 [2024-07-15 17:08:02.327347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:36352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.723 [2024-07-15 17:08:02.327376] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:45.723 [2024-07-15 17:08:02.327408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:36360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.723 [2024-07-15 17:08:02.327425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:45.723 [2024-07-15 17:08:02.327448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:35792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.723 [2024-07-15 17:08:02.327463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:45.723 [2024-07-15 17:08:02.327485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:35800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.723 [2024-07-15 17:08:02.327500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:45.723 [2024-07-15 17:08:02.327535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:35808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.723 [2024-07-15 17:08:02.327559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:45.723 [2024-07-15 17:08:02.327580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:35816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.723 [2024-07-15 17:08:02.327595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:45.723 [2024-07-15 17:08:02.327616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:35824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.723 [2024-07-15 17:08:02.327630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:45.723 [2024-07-15 17:08:02.327651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:35832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.723 [2024-07-15 17:08:02.327665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:45.723 [2024-07-15 17:08:02.327686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:35840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.723 [2024-07-15 17:08:02.327699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:45.723 [2024-07-15 17:08:02.327720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:35848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.723 [2024-07-15 17:08:02.327742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:45.723 [2024-07-15 17:08:02.327764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:35856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:19:45.723 [2024-07-15 17:08:02.327782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:45.723 [2024-07-15 17:08:02.327804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:35864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.723 [2024-07-15 17:08:02.327817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:45.723 [2024-07-15 17:08:02.336806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:35872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.723 [2024-07-15 17:08:02.336848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:45.723 [2024-07-15 17:08:02.336880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:35880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.723 [2024-07-15 17:08:02.336899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:45.723 [2024-07-15 17:08:02.336926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:35888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.723 [2024-07-15 17:08:02.336944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:45.723 [2024-07-15 17:08:02.336970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:35896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.723 [2024-07-15 17:08:02.336988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:45.723 [2024-07-15 17:08:02.337014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:35904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.723 [2024-07-15 17:08:02.337031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:45.723 [2024-07-15 17:08:02.337058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:35912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.723 [2024-07-15 17:08:02.337075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:45.723 [2024-07-15 17:08:02.337102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:35920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.723 [2024-07-15 17:08:02.337119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:45.723 [2024-07-15 17:08:02.337145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:35928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.723 [2024-07-15 17:08:02.337162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:45.723 [2024-07-15 17:08:02.337193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 
nsid:1 lba:35936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.723 [2024-07-15 17:08:02.337210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:45.723 [2024-07-15 17:08:02.337236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:35944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.723 [2024-07-15 17:08:02.337253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:45.723 [2024-07-15 17:08:02.337297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:35952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.723 [2024-07-15 17:08:02.337315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:45.723 [2024-07-15 17:08:02.337341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:35960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.723 [2024-07-15 17:08:02.337377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:45.723 [2024-07-15 17:08:02.337406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:35968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.723 [2024-07-15 17:08:02.337424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:45.723 [2024-07-15 17:08:02.337450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:35976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.723 [2024-07-15 17:08:02.337468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:45.723 [2024-07-15 17:08:02.337494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:36368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.723 [2024-07-15 17:08:02.337513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:45.723 [2024-07-15 17:08:02.337539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:36376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.723 [2024-07-15 17:08:02.337556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:45.723 [2024-07-15 17:08:02.337582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:36384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.723 [2024-07-15 17:08:02.337600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:45.723 [2024-07-15 17:08:02.337625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:36392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.723 [2024-07-15 17:08:02.337642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:45.723 [2024-07-15 17:08:02.337668] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:36400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.723 [2024-07-15 17:08:02.337685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:45.723 [2024-07-15 17:08:02.337711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:36408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.723 [2024-07-15 17:08:02.337728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:45.723 [2024-07-15 17:08:02.337754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:36416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.723 [2024-07-15 17:08:02.337772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:45.723 [2024-07-15 17:08:02.337805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:36424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.723 [2024-07-15 17:08:02.337825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:45.723 [2024-07-15 17:08:02.337864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:35984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.723 [2024-07-15 17:08:02.337882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:45.723 [2024-07-15 17:08:02.337908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:35992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.723 [2024-07-15 17:08:02.337925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:45.724 [2024-07-15 17:08:02.337951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:36000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.724 [2024-07-15 17:08:02.337968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:45.724 [2024-07-15 17:08:02.337995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:36008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.724 [2024-07-15 17:08:02.338012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:45.724 [2024-07-15 17:08:02.338038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:36016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.724 [2024-07-15 17:08:02.338055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:45.724 [2024-07-15 17:08:02.338081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:36024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.724 [2024-07-15 17:08:02.338098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 
00:19:45.724 [2024-07-15 17:08:02.340222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:36032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.724 [2024-07-15 17:08:02.340259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:45.724 [2024-07-15 17:08:02.340294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:36040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.724 [2024-07-15 17:08:02.340314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:45.724 [2024-07-15 17:08:02.340341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:35408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.724 [2024-07-15 17:08:02.340376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:45.724 [2024-07-15 17:08:02.340406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:35416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.724 [2024-07-15 17:08:02.340424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:45.724 [2024-07-15 17:08:02.340450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:35424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.724 [2024-07-15 17:08:02.340467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:45.724 [2024-07-15 17:08:02.340493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:35432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.724 [2024-07-15 17:08:02.340510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:45.724 [2024-07-15 17:08:02.340536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:35440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.724 [2024-07-15 17:08:02.340567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:45.724 [2024-07-15 17:08:02.340595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:35448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.724 [2024-07-15 17:08:02.340613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.724 [2024-07-15 17:08:02.340639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:35456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.724 [2024-07-15 17:08:02.340656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:45.724 [2024-07-15 17:08:02.340682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:35464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.724 [2024-07-15 17:08:02.340699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:45.724 [2024-07-15 17:08:02.340725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:35472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.724 [2024-07-15 17:08:02.340742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:45.724 [2024-07-15 17:08:02.340769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:35480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.724 [2024-07-15 17:08:02.340786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:45.724 [2024-07-15 17:08:02.340812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:35488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.724 [2024-07-15 17:08:02.340829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:45.724 [2024-07-15 17:08:02.340855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:35496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.724 [2024-07-15 17:08:02.340872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:45.724 [2024-07-15 17:08:02.340898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:35504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.724 [2024-07-15 17:08:02.340915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:45.724 [2024-07-15 17:08:02.340941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:35512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.724 [2024-07-15 17:08:02.340958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:45.724 [2024-07-15 17:08:02.340984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:35520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.724 [2024-07-15 17:08:02.341001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:45.724 [2024-07-15 17:08:02.341027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:35528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.724 [2024-07-15 17:08:02.341044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:45.724 [2024-07-15 17:08:02.341070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:35536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.724 [2024-07-15 17:08:02.341094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:45.724 [2024-07-15 17:08:02.341122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:35544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.724 [2024-07-15 17:08:02.341139] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:45.724 [2024-07-15 17:08:02.341166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:35552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.724 [2024-07-15 17:08:02.341183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:45.724 [2024-07-15 17:08:02.341208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:35560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.724 [2024-07-15 17:08:02.341225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:45.724 [2024-07-15 17:08:02.341252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:35568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.724 [2024-07-15 17:08:02.341269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:45.724 [2024-07-15 17:08:02.341295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:35576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.724 [2024-07-15 17:08:02.341312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:45.724 [2024-07-15 17:08:02.341338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:35584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.724 [2024-07-15 17:08:02.341368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:45.724 [2024-07-15 17:08:02.341397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:35592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.724 [2024-07-15 17:08:02.341414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:45.724 [2024-07-15 17:08:02.341440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:36048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.724 [2024-07-15 17:08:02.341457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:45.724 [2024-07-15 17:08:02.341483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:36056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.724 [2024-07-15 17:08:02.341501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:45.724 [2024-07-15 17:08:02.341527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:36064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.724 [2024-07-15 17:08:02.341544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:45.724 [2024-07-15 17:08:02.341569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:36072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:45.724 [2024-07-15 17:08:02.341587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:45.724 [2024-07-15 17:08:02.341613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:36080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.724 [2024-07-15 17:08:02.341631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:45.724 [2024-07-15 17:08:02.341671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:36088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.724 [2024-07-15 17:08:02.341691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:45.724 [2024-07-15 17:08:02.341717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:36096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.724 [2024-07-15 17:08:02.341735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:45.724 [2024-07-15 17:08:02.341761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:36104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.724 [2024-07-15 17:08:02.341778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:45.724 [2024-07-15 17:08:02.341804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:35600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.724 [2024-07-15 17:08:02.341821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:45.724 [2024-07-15 17:08:02.341847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:35608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.724 [2024-07-15 17:08:02.341865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:45.724 [2024-07-15 17:08:02.341890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:35616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.724 [2024-07-15 17:08:02.341908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:45.724 [2024-07-15 17:08:02.341933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:35624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.724 [2024-07-15 17:08:02.341951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:45.724 [2024-07-15 17:08:02.341976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:35632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.724 [2024-07-15 17:08:02.341994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:45.724 [2024-07-15 17:08:02.342020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 
lba:35640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.724 [2024-07-15 17:08:02.342037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:45.724 [2024-07-15 17:08:02.342063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:35648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.724 [2024-07-15 17:08:02.342080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:45.724 [2024-07-15 17:08:02.342106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:35656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.724 [2024-07-15 17:08:02.342123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:45.724 [2024-07-15 17:08:02.342149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:35664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.724 [2024-07-15 17:08:02.342166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:45.724 [2024-07-15 17:08:02.342204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:35672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.724 [2024-07-15 17:08:02.342222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:45.724 [2024-07-15 17:08:02.342248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:35680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.724 [2024-07-15 17:08:02.342266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:45.724 [2024-07-15 17:08:02.342291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:35688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.724 [2024-07-15 17:08:02.342308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:45.724 [2024-07-15 17:08:02.342334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:35696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.724 [2024-07-15 17:08:02.342352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:45.724 [2024-07-15 17:08:02.342394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:35704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.724 [2024-07-15 17:08:02.342412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:45.724 [2024-07-15 17:08:02.342438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:35712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.724 [2024-07-15 17:08:02.342456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:45.724 [2024-07-15 17:08:02.342482] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:35720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.724 [2024-07-15 17:08:02.342508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:45.724 [2024-07-15 17:08:02.342534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:36112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.724 [2024-07-15 17:08:02.342551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:45.724 [2024-07-15 17:08:02.342577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:36120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.724 [2024-07-15 17:08:02.342594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:45.724 [2024-07-15 17:08:02.342620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:36128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.724 [2024-07-15 17:08:02.342638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:45.724 [2024-07-15 17:08:02.342663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:36136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.724 [2024-07-15 17:08:02.342681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:45.724 [2024-07-15 17:08:02.342707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:36144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.724 [2024-07-15 17:08:02.342724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:45.724 [2024-07-15 17:08:02.342759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:36152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.725 [2024-07-15 17:08:02.342777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:45.725 [2024-07-15 17:08:02.342803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:36160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.725 [2024-07-15 17:08:02.342821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:45.725 [2024-07-15 17:08:02.342847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:36168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.725 [2024-07-15 17:08:02.342864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:45.725 [2024-07-15 17:08:02.342891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:35728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.725 [2024-07-15 17:08:02.342908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 
00:19:45.725 [2024-07-15 17:08:02.342934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:35736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.725 [2024-07-15 17:08:02.342951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:45.725 [2024-07-15 17:08:02.342977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:35744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.725 [2024-07-15 17:08:02.342994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:45.725 [2024-07-15 17:08:02.343020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:35752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.725 [2024-07-15 17:08:02.343037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:45.725 [2024-07-15 17:08:02.343064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:35760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.725 [2024-07-15 17:08:02.343081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:45.725 [2024-07-15 17:08:02.343107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:35768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.725 [2024-07-15 17:08:02.343124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:45.725 [2024-07-15 17:08:02.343150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:35776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.725 [2024-07-15 17:08:02.343167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:45.725 [2024-07-15 17:08:02.343193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:35784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.725 [2024-07-15 17:08:02.343210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:45.725 [2024-07-15 17:08:02.343236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:36176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.725 [2024-07-15 17:08:02.343253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:45.725 [2024-07-15 17:08:02.343279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:36184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.725 [2024-07-15 17:08:02.343303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:45.725 [2024-07-15 17:08:02.343330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:36192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.725 [2024-07-15 17:08:02.343348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:45.725 [2024-07-15 17:08:02.343389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:36200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.725 [2024-07-15 17:08:02.343407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:45.725 [2024-07-15 17:08:02.343433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:36208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.725 [2024-07-15 17:08:02.343451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:45.725 [2024-07-15 17:08:02.343499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:36216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.725 [2024-07-15 17:08:02.343535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:45.725 [2024-07-15 17:08:02.343564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:36224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.725 [2024-07-15 17:08:02.343583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:45.725 [2024-07-15 17:08:02.343614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:36232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.725 [2024-07-15 17:08:02.343631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:45.725 [2024-07-15 17:08:02.343657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:36240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.725 [2024-07-15 17:08:02.343675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:45.725 [2024-07-15 17:08:02.343701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:36248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.725 [2024-07-15 17:08:02.343718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:45.725 [2024-07-15 17:08:02.343744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:36256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.725 [2024-07-15 17:08:02.343761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:45.725 [2024-07-15 17:08:02.343787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:36264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.725 [2024-07-15 17:08:02.343804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:45.725 [2024-07-15 17:08:02.343829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:36272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.725 [2024-07-15 17:08:02.343846] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:45.725 [2024-07-15 17:08:02.343872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:36280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.725 [2024-07-15 17:08:02.343899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:45.725 [2024-07-15 17:08:02.343926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:36288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.725 [2024-07-15 17:08:02.343944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:45.725 [2024-07-15 17:08:02.343969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:36296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.725 [2024-07-15 17:08:02.343986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:45.725 [2024-07-15 17:08:02.344012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:36304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.725 [2024-07-15 17:08:02.344029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:45.725 [2024-07-15 17:08:02.344055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:36312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.725 [2024-07-15 17:08:02.344072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:45.725 [2024-07-15 17:08:02.344098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:36320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.725 [2024-07-15 17:08:02.344115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:45.725 [2024-07-15 17:08:02.344141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:36328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.725 [2024-07-15 17:08:02.344159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:45.725 [2024-07-15 17:08:02.344185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:36336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.725 [2024-07-15 17:08:02.344202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:45.725 [2024-07-15 17:08:02.344227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:36344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.725 [2024-07-15 17:08:02.344245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:45.725 [2024-07-15 17:08:02.344271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:36352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:45.725 [2024-07-15 17:08:02.344289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:45.725 [2024-07-15 17:08:02.344314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:36360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.725 [2024-07-15 17:08:02.344331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:45.725 [2024-07-15 17:08:02.344370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:35792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.725 [2024-07-15 17:08:02.344390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:45.725 [2024-07-15 17:08:02.344417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:35800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.725 [2024-07-15 17:08:02.344434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:45.725 [2024-07-15 17:08:02.344469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:35808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.725 [2024-07-15 17:08:02.344487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:45.725 [2024-07-15 17:08:02.344513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:35816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.725 [2024-07-15 17:08:02.344531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:45.725 [2024-07-15 17:08:02.344557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:35824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.725 [2024-07-15 17:08:02.344574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:45.725 [2024-07-15 17:08:02.344599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:35832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.725 [2024-07-15 17:08:02.344617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:45.725 [2024-07-15 17:08:02.344643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:35840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.725 [2024-07-15 17:08:02.344660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:45.725 [2024-07-15 17:08:02.344685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:35848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.725 [2024-07-15 17:08:02.344702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:45.725 [2024-07-15 17:08:02.344728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 
nsid:1 lba:35856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.725 [2024-07-15 17:08:02.344746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:45.725 [2024-07-15 17:08:02.344772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:35864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.725 [2024-07-15 17:08:02.344789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:45.725 [2024-07-15 17:08:02.344815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:35872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.725 [2024-07-15 17:08:02.344832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:45.725 [2024-07-15 17:08:02.344858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:35880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.725 [2024-07-15 17:08:02.344876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:45.725 [2024-07-15 17:08:02.344902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:35888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.725 [2024-07-15 17:08:02.344919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:45.725 [2024-07-15 17:08:02.344945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:35896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.725 [2024-07-15 17:08:02.344962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:45.725 [2024-07-15 17:08:02.344995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:35904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.725 [2024-07-15 17:08:02.345013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:45.725 [2024-07-15 17:08:02.345039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:35912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.725 [2024-07-15 17:08:02.345056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:45.725 [2024-07-15 17:08:02.345083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:35920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.725 [2024-07-15 17:08:02.345100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:45.725 [2024-07-15 17:08:02.345126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:35928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.725 [2024-07-15 17:08:02.345143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:45.725 [2024-07-15 17:08:02.345169] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:35936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.725 [2024-07-15 17:08:02.345186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:45.725 [2024-07-15 17:08:02.345212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:35944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.725 [2024-07-15 17:08:02.345229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:45.725 [2024-07-15 17:08:02.345255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:35952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.725 [2024-07-15 17:08:02.345272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:45.725 [2024-07-15 17:08:02.345298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:35960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.725 [2024-07-15 17:08:02.345315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:45.725 [2024-07-15 17:08:02.345341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:35968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.725 [2024-07-15 17:08:02.345370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:45.725 [2024-07-15 17:08:02.345398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:35976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.725 [2024-07-15 17:08:02.345416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:45.725 [2024-07-15 17:08:02.345442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:36368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.725 [2024-07-15 17:08:02.345460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:45.726 [2024-07-15 17:08:02.345485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:36376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.726 [2024-07-15 17:08:02.345503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:45.726 [2024-07-15 17:08:02.345529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:36384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.726 [2024-07-15 17:08:02.345553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:45.726 [2024-07-15 17:08:02.345580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:36392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.726 [2024-07-15 17:08:02.345597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006e p:0 m:0 dnr:0 
00:19:45.726 [2024-07-15 17:08:02.345624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:36400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.726 [2024-07-15 17:08:02.345641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:45.726 [2024-07-15 17:08:02.345672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:36408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.726 [2024-07-15 17:08:02.345691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:45.726 [2024-07-15 17:08:02.345717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:36416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.726 [2024-07-15 17:08:02.345735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:45.726 [2024-07-15 17:08:02.345761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:36424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.727 [2024-07-15 17:08:02.345778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:45.727 [2024-07-15 17:08:02.345804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:35984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.727 [2024-07-15 17:08:02.345821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:45.727 [2024-07-15 17:08:02.345847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:35992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.727 [2024-07-15 17:08:02.345864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:45.727 [2024-07-15 17:08:02.345891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:36000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.727 [2024-07-15 17:08:02.345908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:45.727 [2024-07-15 17:08:02.345934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:36008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.727 [2024-07-15 17:08:02.345951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:45.727 [2024-07-15 17:08:02.345977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:36016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.727 [2024-07-15 17:08:02.345995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:45.727 [2024-07-15 17:08:02.348022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:36024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.727 [2024-07-15 17:08:02.348050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:36 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:45.727 [2024-07-15 17:08:02.348076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:36032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.727 [2024-07-15 17:08:02.348103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:45.727 [2024-07-15 17:08:02.348126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:36040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.727 [2024-07-15 17:08:02.348140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:45.727 [2024-07-15 17:08:02.348160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:35408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.727 [2024-07-15 17:08:02.348174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:45.727 [2024-07-15 17:08:02.348194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:35416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.727 [2024-07-15 17:08:02.348208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:45.727 [2024-07-15 17:08:02.348229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:35424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.727 [2024-07-15 17:08:02.348242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:45.727 [2024-07-15 17:08:02.348262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:35432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.727 [2024-07-15 17:08:02.348276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:45.727 [2024-07-15 17:08:02.348296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:35440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.727 [2024-07-15 17:08:02.348310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:45.727 [2024-07-15 17:08:02.348330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:35448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.727 [2024-07-15 17:08:02.348344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.727 [2024-07-15 17:08:02.348364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:35456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.727 [2024-07-15 17:08:02.348392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:45.727 [2024-07-15 17:08:02.348414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:35464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.727 [2024-07-15 17:08:02.348429] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:45.727 [2024-07-15 17:08:02.348449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:35472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.727 [2024-07-15 17:08:02.348462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:45.727 [2024-07-15 17:08:02.348483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:35480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.727 [2024-07-15 17:08:02.348497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:45.727 [2024-07-15 17:08:02.348517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:35488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.727 [2024-07-15 17:08:02.348530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:45.727 [2024-07-15 17:08:02.348558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:35496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.727 [2024-07-15 17:08:02.348573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:45.727 [2024-07-15 17:08:02.348593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:35504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.727 [2024-07-15 17:08:02.348607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:45.727 [2024-07-15 17:08:02.348637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:35512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.727 [2024-07-15 17:08:02.348651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:45.727 [2024-07-15 17:08:02.348671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:35520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.727 [2024-07-15 17:08:02.348685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:45.727 [2024-07-15 17:08:02.348706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:35528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.727 [2024-07-15 17:08:02.348719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:45.727 [2024-07-15 17:08:02.348739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:35536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.727 [2024-07-15 17:08:02.348753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:45.727 [2024-07-15 17:08:02.348773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:35544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:19:45.727 [2024-07-15 17:08:02.348787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:45.727 [2024-07-15 17:08:02.348807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:35552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.727 [2024-07-15 17:08:02.348821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:45.727 [2024-07-15 17:08:02.348841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:35560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.727 [2024-07-15 17:08:02.348855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:45.727 [2024-07-15 17:08:02.348875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:35568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.727 [2024-07-15 17:08:02.348888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:45.727 [2024-07-15 17:08:02.348909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:35576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.727 [2024-07-15 17:08:02.348922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:45.727 [2024-07-15 17:08:02.348943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:35584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.727 [2024-07-15 17:08:02.348957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:45.727 [2024-07-15 17:08:02.348984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:35592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.727 [2024-07-15 17:08:02.348998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:45.727 [2024-07-15 17:08:02.349018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:36048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.727 [2024-07-15 17:08:02.349032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:45.727 [2024-07-15 17:08:02.349052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:36056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.727 [2024-07-15 17:08:02.349066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:45.727 [2024-07-15 17:08:02.349086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:36064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.727 [2024-07-15 17:08:02.349100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:45.727 [2024-07-15 17:08:02.349120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 
nsid:1 lba:36072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.727 [2024-07-15 17:08:02.349134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:45.727 [2024-07-15 17:08:02.349170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:36080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.727 [2024-07-15 17:08:02.349189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:45.727 [2024-07-15 17:08:02.349211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:36088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.727 [2024-07-15 17:08:02.349225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:45.727 [2024-07-15 17:08:02.349246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:36096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.727 [2024-07-15 17:08:02.349260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:45.727 [2024-07-15 17:08:02.349280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:36104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.727 [2024-07-15 17:08:02.349294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:45.727 [2024-07-15 17:08:02.349315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:35600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.727 [2024-07-15 17:08:02.349329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:45.727 [2024-07-15 17:08:02.349349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:35608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.727 [2024-07-15 17:08:02.349376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:45.727 [2024-07-15 17:08:02.349398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:35616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.727 [2024-07-15 17:08:02.349412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:45.727 [2024-07-15 17:08:02.349432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:35624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.727 [2024-07-15 17:08:02.349454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:45.727 [2024-07-15 17:08:02.349476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:35632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.727 [2024-07-15 17:08:02.349490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:45.727 [2024-07-15 17:08:02.349510] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:35640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.727 [2024-07-15 17:08:02.349524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:45.727 [2024-07-15 17:08:02.349545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:35648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.727 [2024-07-15 17:08:02.349559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:45.727 [2024-07-15 17:08:02.349579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:35656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.727 [2024-07-15 17:08:02.349593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:45.727 [2024-07-15 17:08:02.349613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:35664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.727 [2024-07-15 17:08:02.349627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:45.727 [2024-07-15 17:08:02.349647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:35672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.727 [2024-07-15 17:08:02.349661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:45.727 [2024-07-15 17:08:02.349681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:35680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.727 [2024-07-15 17:08:02.349695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:45.727 [2024-07-15 17:08:02.349716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:35688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.727 [2024-07-15 17:08:02.349729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:45.727 [2024-07-15 17:08:02.349750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:35696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.727 [2024-07-15 17:08:02.349764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:45.727 [2024-07-15 17:08:02.349784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:35704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.727 [2024-07-15 17:08:02.349799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:45.728 [2024-07-15 17:08:02.349819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:35712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.728 [2024-07-15 17:08:02.349832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0029 p:0 m:0 
dnr:0 00:19:45.728 [2024-07-15 17:08:02.349853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:35720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.728 [2024-07-15 17:08:02.349872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:45.728 [2024-07-15 17:08:02.349894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:36112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.728 [2024-07-15 17:08:02.349908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:45.728 [2024-07-15 17:08:02.349928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:36120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.728 [2024-07-15 17:08:02.349942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:45.728 [2024-07-15 17:08:02.349962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:36128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.728 [2024-07-15 17:08:02.349976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:45.728 [2024-07-15 17:08:02.349997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:36136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.728 [2024-07-15 17:08:02.350011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:45.728 [2024-07-15 17:08:02.350044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:36144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.728 [2024-07-15 17:08:02.350061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:45.728 [2024-07-15 17:08:02.350083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:36152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.728 [2024-07-15 17:08:02.350097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:45.728 [2024-07-15 17:08:02.350118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:36160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.728 [2024-07-15 17:08:02.350131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:45.728 [2024-07-15 17:08:02.350152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:36168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.728 [2024-07-15 17:08:02.350165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:45.728 [2024-07-15 17:08:02.350185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:35728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.728 [2024-07-15 17:08:02.350199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:45.728 [2024-07-15 17:08:02.350220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:35736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.728 [2024-07-15 17:08:02.350234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:45.728 [2024-07-15 17:08:02.350254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:35744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.728 [2024-07-15 17:08:02.350268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:45.728 [2024-07-15 17:08:02.350289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:35752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.728 [2024-07-15 17:08:02.350302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:45.728 [2024-07-15 17:08:02.350331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:35760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.728 [2024-07-15 17:08:02.350346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:45.728 [2024-07-15 17:08:02.350381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:35768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.728 [2024-07-15 17:08:02.350396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:45.728 [2024-07-15 17:08:02.350417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:35776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.728 [2024-07-15 17:08:02.350431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:45.728 [2024-07-15 17:08:02.350451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:35784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.728 [2024-07-15 17:08:02.350465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:45.728 [2024-07-15 17:08:02.350485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:36176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.728 [2024-07-15 17:08:02.350499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:45.728 [2024-07-15 17:08:02.350520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:36184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.728 [2024-07-15 17:08:02.350534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:45.728 [2024-07-15 17:08:02.350554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:36192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.728 [2024-07-15 17:08:02.350568] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:45.728 [2024-07-15 17:08:02.350588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:36200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.728 [2024-07-15 17:08:02.350602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:45.728 [2024-07-15 17:08:02.350622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:36208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.728 [2024-07-15 17:08:02.350636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:45.728 [2024-07-15 17:08:02.350656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:36216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.728 [2024-07-15 17:08:02.350669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:45.728 [2024-07-15 17:08:02.350690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:36224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.728 [2024-07-15 17:08:02.350704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:45.728 [2024-07-15 17:08:02.350724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:36232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.728 [2024-07-15 17:08:02.350738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:45.728 [2024-07-15 17:08:02.350766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:36240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.728 [2024-07-15 17:08:02.350780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:45.728 [2024-07-15 17:08:02.350801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:36248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.728 [2024-07-15 17:08:02.350815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:45.728 [2024-07-15 17:08:02.350835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:36256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.728 [2024-07-15 17:08:02.350849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:45.728 [2024-07-15 17:08:02.350869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:36264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.728 [2024-07-15 17:08:02.350884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:45.728 [2024-07-15 17:08:02.350908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:36272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:45.728 [2024-07-15 17:08:02.350924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:45.728 [2024-07-15 17:08:02.350945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:36280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.728 [2024-07-15 17:08:02.350959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:45.728 [2024-07-15 17:08:02.350979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:36288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.728 [2024-07-15 17:08:02.350993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:45.728 [2024-07-15 17:08:02.351013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:36296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.728 [2024-07-15 17:08:02.351027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:45.728 [2024-07-15 17:08:02.351047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:36304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.728 [2024-07-15 17:08:02.351061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:45.728 [2024-07-15 17:08:02.351081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:36312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.728 [2024-07-15 17:08:02.351095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:45.728 [2024-07-15 17:08:02.351115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:36320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.728 [2024-07-15 17:08:02.351129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:45.728 [2024-07-15 17:08:02.351149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:36328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.728 [2024-07-15 17:08:02.351163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:45.728 [2024-07-15 17:08:02.351195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:36336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.728 [2024-07-15 17:08:02.351220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:45.728 [2024-07-15 17:08:02.351243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:36344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.728 [2024-07-15 17:08:02.351257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:45.728 [2024-07-15 17:08:02.351278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 
lba:36352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.728 [2024-07-15 17:08:02.351291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:45.728 [2024-07-15 17:08:02.351312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:36360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.728 [2024-07-15 17:08:02.351325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:45.728 [2024-07-15 17:08:02.351346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:35792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.728 [2024-07-15 17:08:02.351374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:45.728 [2024-07-15 17:08:02.351396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:35800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.728 [2024-07-15 17:08:02.351410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:45.728 [2024-07-15 17:08:02.351431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:35808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.728 [2024-07-15 17:08:02.351445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:45.728 [2024-07-15 17:08:02.351466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:35816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.728 [2024-07-15 17:08:02.351480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:45.728 [2024-07-15 17:08:02.351500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:35824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.728 [2024-07-15 17:08:02.351525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:45.728 [2024-07-15 17:08:02.351548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:35832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.728 [2024-07-15 17:08:02.351562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:45.728 [2024-07-15 17:08:02.351582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:35840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.728 [2024-07-15 17:08:02.351596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:45.728 [2024-07-15 17:08:02.351616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:35848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.728 [2024-07-15 17:08:02.351630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:45.728 [2024-07-15 17:08:02.351650] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:35856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.728 [2024-07-15 17:08:02.351674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:45.728 [2024-07-15 17:08:02.351696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:35864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.728 [2024-07-15 17:08:02.351710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:45.728 [2024-07-15 17:08:02.351730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:35872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.728 [2024-07-15 17:08:02.351744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:45.728 [2024-07-15 17:08:02.351765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:35880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.728 [2024-07-15 17:08:02.351778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:45.728 [2024-07-15 17:08:02.351799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:35888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.728 [2024-07-15 17:08:02.351812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:45.728 [2024-07-15 17:08:02.351833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:35896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.728 [2024-07-15 17:08:02.351847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:45.728 [2024-07-15 17:08:02.351867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:35904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.728 [2024-07-15 17:08:02.351881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:45.728 [2024-07-15 17:08:02.351901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:35912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.728 [2024-07-15 17:08:02.351915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:45.728 [2024-07-15 17:08:02.351936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:35920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.728 [2024-07-15 17:08:02.351950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:45.728 [2024-07-15 17:08:02.351970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:35928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.729 [2024-07-15 17:08:02.351983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0064 p:0 m:0 
dnr:0 00:19:45.729 [2024-07-15 17:08:02.352003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:35936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.729 [2024-07-15 17:08:02.352017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:45.729 [2024-07-15 17:08:02.352038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:35944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.729 [2024-07-15 17:08:02.352052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:45.729 [2024-07-15 17:08:02.352072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:35952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.729 [2024-07-15 17:08:02.352091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:45.729 [2024-07-15 17:08:02.352113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:35960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.729 [2024-07-15 17:08:02.352127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:45.729 [2024-07-15 17:08:02.352147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:35968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.729 [2024-07-15 17:08:02.352161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:45.729 [2024-07-15 17:08:02.352181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:35976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.729 [2024-07-15 17:08:02.352195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:45.729 [2024-07-15 17:08:02.352215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:36368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.729 [2024-07-15 17:08:02.352229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:45.729 [2024-07-15 17:08:02.352249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:36376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.729 [2024-07-15 17:08:02.352263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:45.729 [2024-07-15 17:08:02.352283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:36384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.729 [2024-07-15 17:08:02.352297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:45.729 [2024-07-15 17:08:02.352318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:36392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.729 [2024-07-15 17:08:02.352331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:45.729 [2024-07-15 17:08:02.352366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:36400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.729 [2024-07-15 17:08:02.352384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:45.729 [2024-07-15 17:08:02.352406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:36408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.729 [2024-07-15 17:08:02.352420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:45.729 [2024-07-15 17:08:02.352440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:36416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.729 [2024-07-15 17:08:02.352454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:45.729 [2024-07-15 17:08:02.352475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:36424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.729 [2024-07-15 17:08:02.352489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:45.729 [2024-07-15 17:08:02.352509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:35984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.729 [2024-07-15 17:08:02.352523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:45.729 [2024-07-15 17:08:02.352552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:35992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.729 [2024-07-15 17:08:02.352566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:45.729 [2024-07-15 17:08:02.352587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:36000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.729 [2024-07-15 17:08:02.352601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:45.729 [2024-07-15 17:08:02.352622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:36008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.729 [2024-07-15 17:08:02.352636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:45.729 [2024-07-15 17:08:02.353108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:36016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.729 [2024-07-15 17:08:02.353135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:45.729 [2024-07-15 17:08:15.540939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:79064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.729 [2024-07-15 17:08:15.541015] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:45.729 [2024-07-15 17:08:15.541074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:79072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.729 [2024-07-15 17:08:15.541094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:45.729 [2024-07-15 17:08:15.541118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:79080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.729 [2024-07-15 17:08:15.541132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:45.729 [2024-07-15 17:08:15.541153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:79088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.729 [2024-07-15 17:08:15.541167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:45.729 [2024-07-15 17:08:15.541188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:79096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.729 [2024-07-15 17:08:15.541202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:45.729 [2024-07-15 17:08:15.541223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:79104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.729 [2024-07-15 17:08:15.541237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:45.729 [2024-07-15 17:08:15.541257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:79112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.729 [2024-07-15 17:08:15.541271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:45.729 [2024-07-15 17:08:15.541292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:79120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.729 [2024-07-15 17:08:15.541306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:45.729 [2024-07-15 17:08:15.541365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:78616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.729 [2024-07-15 17:08:15.541383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:45.729 [2024-07-15 17:08:15.541405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:78624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.729 [2024-07-15 17:08:15.541419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:45.729 [2024-07-15 17:08:15.541440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:78632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:45.729 [2024-07-15 17:08:15.541454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:45.729 [2024-07-15 17:08:15.541474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:78640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.729 [2024-07-15 17:08:15.541489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:45.729 [2024-07-15 17:08:15.541509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:78648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.729 [2024-07-15 17:08:15.541523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:45.729 [2024-07-15 17:08:15.541544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:78656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.729 [2024-07-15 17:08:15.541557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:45.729 [2024-07-15 17:08:15.541578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:78664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.729 [2024-07-15 17:08:15.541592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:45.729 [2024-07-15 17:08:15.541612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:78672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.729 [2024-07-15 17:08:15.541626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:45.729 [2024-07-15 17:08:15.541646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:78680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.729 [2024-07-15 17:08:15.541660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:45.729 [2024-07-15 17:08:15.541681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:78688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.729 [2024-07-15 17:08:15.541695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:45.729 [2024-07-15 17:08:15.541715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:78696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.729 [2024-07-15 17:08:15.541729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:45.729 [2024-07-15 17:08:15.541751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:78704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.729 [2024-07-15 17:08:15.541765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:45.729 [2024-07-15 17:08:15.541786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 
nsid:1 lba:78712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.729 [2024-07-15 17:08:15.541809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:45.729 [2024-07-15 17:08:15.541831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:78720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.729 [2024-07-15 17:08:15.541846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:45.729 [2024-07-15 17:08:15.541867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:78728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.729 [2024-07-15 17:08:15.541881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:45.729 [2024-07-15 17:08:15.541903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:78736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.729 [2024-07-15 17:08:15.541916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:45.729 [2024-07-15 17:08:15.541968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:79128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.729 [2024-07-15 17:08:15.541989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.729 [2024-07-15 17:08:15.542005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:79136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.729 [2024-07-15 17:08:15.542018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.729 [2024-07-15 17:08:15.542033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:79144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.729 [2024-07-15 17:08:15.542046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.729 [2024-07-15 17:08:15.542061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:79152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.729 [2024-07-15 17:08:15.542074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.729 [2024-07-15 17:08:15.542089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:79160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.729 [2024-07-15 17:08:15.542102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.729 [2024-07-15 17:08:15.542117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:79168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.729 [2024-07-15 17:08:15.542130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.729 [2024-07-15 17:08:15.542145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 
nsid:1 lba:79176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.729 [2024-07-15 17:08:15.542158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.729 [2024-07-15 17:08:15.542172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:79184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.729 [2024-07-15 17:08:15.542186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.729 [2024-07-15 17:08:15.542201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:78744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.729 [2024-07-15 17:08:15.542214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.729 [2024-07-15 17:08:15.542238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:78752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.729 [2024-07-15 17:08:15.542252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.729 [2024-07-15 17:08:15.542267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:78760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.729 [2024-07-15 17:08:15.542280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.730 [2024-07-15 17:08:15.542296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:78768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.730 [2024-07-15 17:08:15.542310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.730 [2024-07-15 17:08:15.542325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:78776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.730 [2024-07-15 17:08:15.542338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.730 [2024-07-15 17:08:15.542365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:78784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.730 [2024-07-15 17:08:15.542381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.730 [2024-07-15 17:08:15.542397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:78792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.730 [2024-07-15 17:08:15.542410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.730 [2024-07-15 17:08:15.542425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:78800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.730 [2024-07-15 17:08:15.542438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.730 [2024-07-15 17:08:15.542453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:79192 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:19:45.730 [2024-07-15 17:08:15.542466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.730 [2024-07-15 17:08:15.542480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:79200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.730 [2024-07-15 17:08:15.542493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.730 [2024-07-15 17:08:15.542508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:79208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.730 [2024-07-15 17:08:15.542521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.730 [2024-07-15 17:08:15.542536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:79216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.730 [2024-07-15 17:08:15.542549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.730 [2024-07-15 17:08:15.542564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:79224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.730 [2024-07-15 17:08:15.542576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.730 [2024-07-15 17:08:15.542591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:79232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.730 [2024-07-15 17:08:15.542611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.730 [2024-07-15 17:08:15.542627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:79240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.730 [2024-07-15 17:08:15.542640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.730 [2024-07-15 17:08:15.542654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:79248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.730 [2024-07-15 17:08:15.542668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.730 [2024-07-15 17:08:15.542683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:79256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.730 [2024-07-15 17:08:15.542695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.730 [2024-07-15 17:08:15.542710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:79264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.730 [2024-07-15 17:08:15.542723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.730 [2024-07-15 17:08:15.542738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:79272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.730 [2024-07-15 
17:08:15.542751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.730 [2024-07-15 17:08:15.542766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:79280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.730 [2024-07-15 17:08:15.542779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.730 [2024-07-15 17:08:15.542794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:79288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.730 [2024-07-15 17:08:15.542809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.730 [2024-07-15 17:08:15.542824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:79296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.730 [2024-07-15 17:08:15.542837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.730 [2024-07-15 17:08:15.542851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:79304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.730 [2024-07-15 17:08:15.542864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.730 [2024-07-15 17:08:15.542879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:79312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.730 [2024-07-15 17:08:15.542893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.730 [2024-07-15 17:08:15.542907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:79320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.730 [2024-07-15 17:08:15.542920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.730 [2024-07-15 17:08:15.542935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:79328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.730 [2024-07-15 17:08:15.542948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.730 [2024-07-15 17:08:15.542968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:79336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.730 [2024-07-15 17:08:15.542982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.730 [2024-07-15 17:08:15.542997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:79344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.730 [2024-07-15 17:08:15.543010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.730 [2024-07-15 17:08:15.543025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:79352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.730 [2024-07-15 17:08:15.543038] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.730 [2024-07-15 17:08:15.543053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:79360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.730 [2024-07-15 17:08:15.543065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.730 [2024-07-15 17:08:15.543080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:79368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.730 [2024-07-15 17:08:15.543094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.730 [2024-07-15 17:08:15.543110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:79376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.730 [2024-07-15 17:08:15.543123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.730 [2024-07-15 17:08:15.543137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:78808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.730 [2024-07-15 17:08:15.543150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.730 [2024-07-15 17:08:15.543165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:78816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.730 [2024-07-15 17:08:15.543178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.730 [2024-07-15 17:08:15.543193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:78824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.730 [2024-07-15 17:08:15.543206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.730 [2024-07-15 17:08:15.543222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:78832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.730 [2024-07-15 17:08:15.543235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.730 [2024-07-15 17:08:15.543250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:78840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.730 [2024-07-15 17:08:15.543263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.730 [2024-07-15 17:08:15.543278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:78848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.730 [2024-07-15 17:08:15.543292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.730 [2024-07-15 17:08:15.543306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:78856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.730 [2024-07-15 17:08:15.543320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.730 [2024-07-15 17:08:15.543340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:78864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.730 [2024-07-15 17:08:15.543366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.730 [2024-07-15 17:08:15.543383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:78872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.730 [2024-07-15 17:08:15.543397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.730 [2024-07-15 17:08:15.543411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:78880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.730 [2024-07-15 17:08:15.543424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.730 [2024-07-15 17:08:15.543440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:78888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.730 [2024-07-15 17:08:15.543453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.730 [2024-07-15 17:08:15.543467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:78896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.730 [2024-07-15 17:08:15.543480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.730 [2024-07-15 17:08:15.543495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:78904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.730 [2024-07-15 17:08:15.543508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.730 [2024-07-15 17:08:15.543535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:78912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.730 [2024-07-15 17:08:15.543549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.730 [2024-07-15 17:08:15.543564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:78920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.730 [2024-07-15 17:08:15.543577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.730 [2024-07-15 17:08:15.543593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:78928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.730 [2024-07-15 17:08:15.543606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.730 [2024-07-15 17:08:15.543621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:78936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.730 [2024-07-15 17:08:15.543634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.730 [2024-07-15 17:08:15.543649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:78944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.730 [2024-07-15 17:08:15.543662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.730 [2024-07-15 17:08:15.543678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:78952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.730 [2024-07-15 17:08:15.543691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.730 [2024-07-15 17:08:15.543706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:78960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.730 [2024-07-15 17:08:15.543725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.730 [2024-07-15 17:08:15.543741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:78968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.730 [2024-07-15 17:08:15.543755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.730 [2024-07-15 17:08:15.543770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:78976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.730 [2024-07-15 17:08:15.543783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.730 [2024-07-15 17:08:15.543798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:78984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.730 [2024-07-15 17:08:15.543810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.730 [2024-07-15 17:08:15.543826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:78992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.730 [2024-07-15 17:08:15.543839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.730 [2024-07-15 17:08:15.543853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:79384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.730 [2024-07-15 17:08:15.543866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.730 [2024-07-15 17:08:15.543881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:79392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.730 [2024-07-15 17:08:15.543894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.730 [2024-07-15 17:08:15.543909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:79400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.730 [2024-07-15 17:08:15.543922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.730 
[2024-07-15 17:08:15.543937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:79408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.730 [2024-07-15 17:08:15.543950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.730 [2024-07-15 17:08:15.543965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:79416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.730 [2024-07-15 17:08:15.543978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.731 [2024-07-15 17:08:15.543993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:79424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.731 [2024-07-15 17:08:15.544006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.731 [2024-07-15 17:08:15.544021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:79432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.731 [2024-07-15 17:08:15.544034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.731 [2024-07-15 17:08:15.544049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:79440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.731 [2024-07-15 17:08:15.544078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.731 [2024-07-15 17:08:15.544099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:79000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.731 [2024-07-15 17:08:15.544113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.731 [2024-07-15 17:08:15.544128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:79008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.731 [2024-07-15 17:08:15.544142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.731 [2024-07-15 17:08:15.544157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:79016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.731 [2024-07-15 17:08:15.544170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.731 [2024-07-15 17:08:15.544185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:79024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.731 [2024-07-15 17:08:15.544198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.731 [2024-07-15 17:08:15.544213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:79032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.731 [2024-07-15 17:08:15.544226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.731 [2024-07-15 17:08:15.544241] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:79040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.731 [2024-07-15 17:08:15.544254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.731 [2024-07-15 17:08:15.544269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:79048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.731 [2024-07-15 17:08:15.544282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.731 [2024-07-15 17:08:15.544297] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13536d0 is same with the state(5) to be set 00:19:45.731 [2024-07-15 17:08:15.544312] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:45.731 [2024-07-15 17:08:15.544322] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:45.731 [2024-07-15 17:08:15.544333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79056 len:8 PRP1 0x0 PRP2 0x0 00:19:45.731 [2024-07-15 17:08:15.544346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.731 [2024-07-15 17:08:15.544372] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:45.731 [2024-07-15 17:08:15.544383] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:45.731 [2024-07-15 17:08:15.544393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79448 len:8 PRP1 0x0 PRP2 0x0 00:19:45.731 [2024-07-15 17:08:15.544406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.731 [2024-07-15 17:08:15.544419] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:45.731 [2024-07-15 17:08:15.544429] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:45.731 [2024-07-15 17:08:15.544439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79456 len:8 PRP1 0x0 PRP2 0x0 00:19:45.731 [2024-07-15 17:08:15.544452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.731 [2024-07-15 17:08:15.544471] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:45.731 [2024-07-15 17:08:15.544481] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:45.731 [2024-07-15 17:08:15.544491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79464 len:8 PRP1 0x0 PRP2 0x0 00:19:45.731 [2024-07-15 17:08:15.544505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.731 [2024-07-15 17:08:15.544518] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:45.731 [2024-07-15 17:08:15.544527] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:45.731 [2024-07-15 17:08:15.544537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79472 len:8 PRP1 0x0 PRP2 
0x0 00:19:45.731 [2024-07-15 17:08:15.544550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.731 [2024-07-15 17:08:15.544564] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:45.731 [2024-07-15 17:08:15.544573] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:45.731 [2024-07-15 17:08:15.544583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79480 len:8 PRP1 0x0 PRP2 0x0 00:19:45.731 [2024-07-15 17:08:15.544596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.731 [2024-07-15 17:08:15.544609] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:45.731 [2024-07-15 17:08:15.544618] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:45.731 [2024-07-15 17:08:15.544628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79488 len:8 PRP1 0x0 PRP2 0x0 00:19:45.731 [2024-07-15 17:08:15.544640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.731 [2024-07-15 17:08:15.544653] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:45.731 [2024-07-15 17:08:15.544662] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:45.731 [2024-07-15 17:08:15.544672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79496 len:8 PRP1 0x0 PRP2 0x0 00:19:45.731 [2024-07-15 17:08:15.544685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.731 [2024-07-15 17:08:15.544698] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:45.731 [2024-07-15 17:08:15.544707] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:45.731 [2024-07-15 17:08:15.544717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79504 len:8 PRP1 0x0 PRP2 0x0 00:19:45.731 [2024-07-15 17:08:15.544729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.731 [2024-07-15 17:08:15.544742] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:45.731 [2024-07-15 17:08:15.544752] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:45.731 [2024-07-15 17:08:15.544761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79512 len:8 PRP1 0x0 PRP2 0x0 00:19:45.731 [2024-07-15 17:08:15.544774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.731 [2024-07-15 17:08:15.544787] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:45.731 [2024-07-15 17:08:15.544796] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:45.731 [2024-07-15 17:08:15.544805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79520 len:8 PRP1 0x0 PRP2 0x0 00:19:45.731 [2024-07-15 17:08:15.544823] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.731 [2024-07-15 17:08:15.544837] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:45.731 [2024-07-15 17:08:15.544847] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:45.731 [2024-07-15 17:08:15.544856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79528 len:8 PRP1 0x0 PRP2 0x0 00:19:45.731 [2024-07-15 17:08:15.544869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.731 [2024-07-15 17:08:15.544882] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:45.731 [2024-07-15 17:08:15.544891] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:45.731 [2024-07-15 17:08:15.544901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79536 len:8 PRP1 0x0 PRP2 0x0 00:19:45.731 [2024-07-15 17:08:15.544914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.731 [2024-07-15 17:08:15.544928] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:45.731 [2024-07-15 17:08:15.544937] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:45.731 [2024-07-15 17:08:15.544947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79544 len:8 PRP1 0x0 PRP2 0x0 00:19:45.731 [2024-07-15 17:08:15.544960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.731 [2024-07-15 17:08:15.544973] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:45.731 [2024-07-15 17:08:15.544982] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:45.731 [2024-07-15 17:08:15.544992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79552 len:8 PRP1 0x0 PRP2 0x0 00:19:45.731 [2024-07-15 17:08:15.545004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.731 [2024-07-15 17:08:15.545017] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:45.731 [2024-07-15 17:08:15.545027] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:45.731 [2024-07-15 17:08:15.545037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79560 len:8 PRP1 0x0 PRP2 0x0 00:19:45.731 [2024-07-15 17:08:15.545049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.731 [2024-07-15 17:08:15.545062] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:45.731 [2024-07-15 17:08:15.545071] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:45.731 [2024-07-15 17:08:15.545081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79568 len:8 PRP1 0x0 PRP2 0x0 00:19:45.731 [2024-07-15 17:08:15.545094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.731 [2024-07-15 17:08:15.545107] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:45.731 [2024-07-15 17:08:15.545116] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:45.731 [2024-07-15 17:08:15.545125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79576 len:8 PRP1 0x0 PRP2 0x0 00:19:45.731 [2024-07-15 17:08:15.545137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.731 [2024-07-15 17:08:15.545151] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:45.731 [2024-07-15 17:08:15.545161] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:45.731 [2024-07-15 17:08:15.545176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79584 len:8 PRP1 0x0 PRP2 0x0 00:19:45.731 [2024-07-15 17:08:15.545189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.731 [2024-07-15 17:08:15.545203] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:45.731 [2024-07-15 17:08:15.545212] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:45.731 [2024-07-15 17:08:15.545231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79592 len:8 PRP1 0x0 PRP2 0x0 00:19:45.731 [2024-07-15 17:08:15.545244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.731 [2024-07-15 17:08:15.545257] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:45.731 [2024-07-15 17:08:15.545267] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:45.731 [2024-07-15 17:08:15.545277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79600 len:8 PRP1 0x0 PRP2 0x0 00:19:45.731 [2024-07-15 17:08:15.545290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.731 [2024-07-15 17:08:15.545303] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:45.731 [2024-07-15 17:08:15.545312] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:45.731 [2024-07-15 17:08:15.545322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79608 len:8 PRP1 0x0 PRP2 0x0 00:19:45.731 [2024-07-15 17:08:15.545335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.731 [2024-07-15 17:08:15.545348] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:45.731 [2024-07-15 17:08:15.545368] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:45.731 [2024-07-15 17:08:15.545379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79616 len:8 PRP1 0x0 PRP2 0x0 00:19:45.731 [2024-07-15 17:08:15.545392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:19:45.731 [2024-07-15 17:08:15.545406] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:45.731 [2024-07-15 17:08:15.545415] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:45.731 [2024-07-15 17:08:15.545425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79624 len:8 PRP1 0x0 PRP2 0x0 00:19:45.731 [2024-07-15 17:08:15.545438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.731 [2024-07-15 17:08:15.545450] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:45.731 [2024-07-15 17:08:15.545460] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:45.731 [2024-07-15 17:08:15.545470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79632 len:8 PRP1 0x0 PRP2 0x0 00:19:45.731 [2024-07-15 17:08:15.545483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.731 [2024-07-15 17:08:15.545541] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x13536d0 was disconnected and freed. reset controller. 00:19:45.731 [2024-07-15 17:08:15.545661] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:45.731 [2024-07-15 17:08:15.545686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.731 [2024-07-15 17:08:15.545701] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:45.731 [2024-07-15 17:08:15.545728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.731 [2024-07-15 17:08:15.545743] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:45.731 [2024-07-15 17:08:15.545756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.731 [2024-07-15 17:08:15.545770] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:45.731 [2024-07-15 17:08:15.545783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.731 [2024-07-15 17:08:15.545797] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.731 [2024-07-15 17:08:15.545816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.731 [2024-07-15 17:08:15.545836] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cd100 is same with the state(5) to be set 00:19:45.731 [2024-07-15 17:08:15.546973] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:45.731 [2024-07-15 17:08:15.547012] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to 
flush tqpair=0x12cd100 (9): Bad file descriptor 00:19:45.731 [2024-07-15 17:08:15.547436] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:45.731 [2024-07-15 17:08:15.547470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cd100 with addr=10.0.0.2, port=4421 00:19:45.731 [2024-07-15 17:08:15.547489] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cd100 is same with the state(5) to be set 00:19:45.731 [2024-07-15 17:08:15.547567] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cd100 (9): Bad file descriptor 00:19:45.731 [2024-07-15 17:08:15.547603] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:45.732 [2024-07-15 17:08:15.547619] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:45.732 [2024-07-15 17:08:15.547632] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:45.732 [2024-07-15 17:08:15.547663] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:45.732 [2024-07-15 17:08:15.547678] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:45.732 [2024-07-15 17:08:25.612890] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:19:45.732 Received shutdown signal, test time was about 54.990956 seconds 00:19:45.732 00:19:45.732 Latency(us) 00:19:45.732 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:45.732 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:45.732 Verification LBA range: start 0x0 length 0x4000 00:19:45.732 Nvme0n1 : 54.99 7568.68 29.57 0.00 0.00 16879.31 837.82 7076934.75 00:19:45.732 =================================================================================================================== 00:19:45.732 Total : 7568.68 29.57 0.00 0.00 16879.31 837.82 7076934.75 00:19:45.732 17:08:35 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:45.990 17:08:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:19:45.990 17:08:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:45.990 17:08:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:19:45.990 17:08:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:45.990 17:08:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@117 -- # sync 00:19:45.990 17:08:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:45.990 17:08:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@120 -- # set +e 00:19:45.990 17:08:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:45.990 17:08:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:45.990 rmmod nvme_tcp 00:19:45.990 rmmod nvme_fabrics 00:19:45.990 rmmod nvme_keyring 00:19:45.990 17:08:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:45.990 17:08:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@124 -- # set -e 00:19:45.990 17:08:36 nvmf_tcp.nvmf_host_multipath -- 
nvmf/common.sh@125 -- # return 0 00:19:45.990 17:08:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@489 -- # '[' -n 80850 ']' 00:19:45.990 17:08:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@490 -- # killprocess 80850 00:19:45.990 17:08:36 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@948 -- # '[' -z 80850 ']' 00:19:45.990 17:08:36 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # kill -0 80850 00:19:45.990 17:08:36 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # uname 00:19:45.990 17:08:36 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:45.990 17:08:36 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80850 00:19:45.990 killing process with pid 80850 00:19:45.990 17:08:36 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:45.990 17:08:36 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:45.990 17:08:36 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80850' 00:19:45.990 17:08:36 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@967 -- # kill 80850 00:19:45.990 17:08:36 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@972 -- # wait 80850 00:19:46.263 17:08:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:46.263 17:08:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:46.263 17:08:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:46.263 17:08:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:46.263 17:08:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:46.263 17:08:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:46.263 17:08:36 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:46.263 17:08:36 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:46.263 17:08:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:46.263 00:19:46.263 real 1m0.724s 00:19:46.263 user 2m48.176s 00:19:46.263 sys 0m18.060s 00:19:46.263 17:08:36 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:46.263 ************************************ 00:19:46.263 END TEST nvmf_host_multipath 00:19:46.263 ************************************ 00:19:46.263 17:08:36 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:19:46.523 17:08:36 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:46.523 17:08:36 nvmf_tcp -- nvmf/nvmf.sh@118 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:19:46.523 17:08:36 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:46.523 17:08:36 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:46.523 17:08:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:46.523 ************************************ 00:19:46.523 START TEST nvmf_timeout 00:19:46.523 ************************************ 00:19:46.523 17:08:36 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:19:46.523 * Looking for test 
storage... 00:19:46.523 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:46.523 17:08:36 nvmf_tcp.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:46.523 17:08:36 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:19:46.523 17:08:36 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:46.523 17:08:36 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:46.523 17:08:36 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:46.523 17:08:36 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:46.523 17:08:36 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:46.523 17:08:36 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:46.523 17:08:36 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:46.523 17:08:36 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:46.523 17:08:36 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:46.523 17:08:36 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:46.523 17:08:36 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da 00:19:46.523 17:08:36 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=0b4e8503-7bac-4879-926a-209303c4b3da 00:19:46.523 17:08:36 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:46.523 17:08:36 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:46.523 17:08:36 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:46.523 17:08:36 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:46.523 17:08:36 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:46.523 17:08:36 nvmf_tcp.nvmf_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:46.523 17:08:36 nvmf_tcp.nvmf_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:46.523 17:08:36 nvmf_tcp.nvmf_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:46.523 17:08:36 nvmf_tcp.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:46.523 17:08:36 nvmf_tcp.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:19:46.523 17:08:36 nvmf_tcp.nvmf_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:46.523 17:08:36 nvmf_tcp.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:19:46.523 17:08:36 nvmf_tcp.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:46.523 17:08:36 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@47 -- # : 0 00:19:46.523 17:08:36 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:46.523 17:08:36 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:46.523 17:08:36 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:46.523 17:08:36 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:46.523 17:08:36 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:46.523 17:08:36 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:46.523 17:08:36 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:46.523 17:08:36 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:46.523 17:08:36 nvmf_tcp.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:46.523 17:08:36 nvmf_tcp.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:46.523 17:08:36 nvmf_tcp.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:46.523 17:08:36 nvmf_tcp.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:19:46.523 17:08:36 nvmf_tcp.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:46.523 17:08:36 nvmf_tcp.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:19:46.523 17:08:36 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:46.523 17:08:36 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:46.523 17:08:36 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:46.523 17:08:36 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:46.523 17:08:36 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:46.523 17:08:36 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:46.523 17:08:36 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:46.523 17:08:36 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:46.523 17:08:36 
nvmf_tcp.nvmf_timeout -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:19:46.524 17:08:36 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:19:46.524 17:08:36 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:19:46.524 17:08:36 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:19:46.524 17:08:36 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:19:46.524 17:08:36 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@432 -- # nvmf_veth_init 00:19:46.524 17:08:36 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:46.524 17:08:36 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:46.524 17:08:36 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:46.524 17:08:36 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:46.524 17:08:36 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:46.524 17:08:36 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:46.524 17:08:36 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:46.524 17:08:36 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:46.524 17:08:36 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:46.524 17:08:36 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:46.524 17:08:36 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:46.524 17:08:36 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:46.524 17:08:36 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:46.524 17:08:36 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:46.524 Cannot find device "nvmf_tgt_br" 00:19:46.524 17:08:36 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@155 -- # true 00:19:46.524 17:08:36 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:46.524 Cannot find device "nvmf_tgt_br2" 00:19:46.524 17:08:36 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@156 -- # true 00:19:46.524 17:08:36 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:46.524 17:08:36 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:46.524 Cannot find device "nvmf_tgt_br" 00:19:46.524 17:08:36 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@158 -- # true 00:19:46.524 17:08:36 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:46.524 Cannot find device "nvmf_tgt_br2" 00:19:46.524 17:08:36 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@159 -- # true 00:19:46.524 17:08:36 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:46.524 17:08:36 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:46.524 17:08:36 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:46.524 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:46.524 17:08:36 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:19:46.524 17:08:36 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:46.782 Cannot open network 
namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:46.782 17:08:36 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:19:46.782 17:08:36 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:46.782 17:08:36 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:46.782 17:08:36 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:46.782 17:08:36 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:46.782 17:08:36 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:46.782 17:08:36 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:46.782 17:08:36 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:46.782 17:08:36 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:46.782 17:08:36 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:46.782 17:08:36 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:46.782 17:08:36 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:46.782 17:08:36 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:46.782 17:08:36 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:46.782 17:08:36 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:46.782 17:08:36 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:46.782 17:08:36 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:46.782 17:08:36 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:46.782 17:08:36 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:46.782 17:08:36 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:46.782 17:08:36 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:46.782 17:08:36 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:46.782 17:08:36 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:46.782 17:08:36 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:46.782 17:08:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:46.782 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:46.782 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.086 ms 00:19:46.782 00:19:46.782 --- 10.0.0.2 ping statistics --- 00:19:46.782 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:46.782 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:19:46.782 17:08:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:46.782 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:19:46.782 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:19:46.782 00:19:46.782 --- 10.0.0.3 ping statistics --- 00:19:46.782 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:46.782 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:19:46.782 17:08:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:46.782 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:46.782 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:19:46.782 00:19:46.782 --- 10.0.0.1 ping statistics --- 00:19:46.782 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:46.782 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:19:46.782 17:08:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:46.782 17:08:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@433 -- # return 0 00:19:46.782 17:08:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:46.782 17:08:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:46.782 17:08:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:46.782 17:08:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:46.782 17:08:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:46.782 17:08:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:46.782 17:08:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:46.782 17:08:37 nvmf_tcp.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:19:46.782 17:08:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:46.782 17:08:37 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:46.782 17:08:37 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:46.782 17:08:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@481 -- # nvmfpid=82001 00:19:46.782 17:08:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:19:46.782 17:08:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@482 -- # waitforlisten 82001 00:19:46.782 17:08:37 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 82001 ']' 00:19:46.782 17:08:37 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:46.782 17:08:37 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:46.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:46.782 17:08:37 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:46.782 17:08:37 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:46.782 17:08:37 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:47.040 [2024-07-15 17:08:37.099491] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
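For reference, the veth/namespace topology that nvmf_veth_init assembles in the steps above can be reproduced stand-alone with roughly the following commands. This is a condensed sketch of the logged steps only, using the same interface names and 10.0.0.0/24 addresses as nvmf/common.sh; the cleanup of any pre-existing interfaces is omitted.

  # target-side namespace plus three veth pairs: one for the initiator, two for the target listeners
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  # addressing: 10.0.0.1 = initiator side, 10.0.0.2 and 10.0.0.3 = target ports inside the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  # bring everything up and bridge the host-side peers together
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  # accept NVMe/TCP traffic on port 4420, allow bridge-local forwarding, then sanity-check with ping
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2
  ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1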
00:19:47.040 [2024-07-15 17:08:37.099600] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:47.040 [2024-07-15 17:08:37.242579] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:47.298 [2024-07-15 17:08:37.364638] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:47.298 [2024-07-15 17:08:37.364697] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:47.298 [2024-07-15 17:08:37.364711] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:47.298 [2024-07-15 17:08:37.364721] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:47.298 [2024-07-15 17:08:37.364731] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:47.298 [2024-07-15 17:08:37.364838] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:47.298 [2024-07-15 17:08:37.365055] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:47.298 [2024-07-15 17:08:37.425477] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:47.864 17:08:38 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:47.865 17:08:38 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:19:47.865 17:08:38 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:47.865 17:08:38 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:47.865 17:08:38 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:47.865 17:08:38 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:47.865 17:08:38 nvmf_tcp.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:47.865 17:08:38 nvmf_tcp.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:48.121 [2024-07-15 17:08:38.368040] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:48.121 17:08:38 nvmf_tcp.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:19:48.378 Malloc0 00:19:48.378 17:08:38 nvmf_tcp.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:48.637 17:08:38 nvmf_tcp.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:48.897 17:08:39 nvmf_tcp.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:49.155 [2024-07-15 17:08:39.443288] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:49.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
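The target bring-up logged above reduces to launching nvmf_tgt inside the namespace and issuing five RPCs against its default socket. A minimal sketch, with $SPDK standing in here for /home/vagrant/spdk_repo/spdk and all arguments taken from the log:

  # start the target in the namespace: instance 0, tracepoint mask 0xFFFF, core mask 0x3
  # (the test waits for /var/tmp/spdk.sock before issuing RPCs)
  ip netns exec nvmf_tgt_ns_spdk "$SPDK"/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
  # create the TCP transport with the options used by the test
  "$SPDK"/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  # 64 MiB malloc bdev with 512-byte blocks, exported through one subsystem on 10.0.0.2:4420
  "$SPDK"/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  "$SPDK"/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  "$SPDK"/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  "$SPDK"/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420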
00:19:49.414 17:08:39 nvmf_tcp.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=82055 00:19:49.414 17:08:39 nvmf_tcp.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 82055 /var/tmp/bdevperf.sock 00:19:49.414 17:08:39 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 82055 ']' 00:19:49.414 17:08:39 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:49.414 17:08:39 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:49.414 17:08:39 nvmf_tcp.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:19:49.414 17:08:39 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:49.414 17:08:39 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:49.414 17:08:39 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:49.414 [2024-07-15 17:08:39.520889] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:19:49.414 [2024-07-15 17:08:39.520985] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82055 ] 00:19:49.414 [2024-07-15 17:08:39.656379] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:49.672 [2024-07-15 17:08:39.783548] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:49.672 [2024-07-15 17:08:39.837919] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:50.238 17:08:40 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:50.238 17:08:40 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:19:50.238 17:08:40 nvmf_tcp.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:19:50.496 17:08:40 nvmf_tcp.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:19:50.753 NVMe0n1 00:19:50.753 17:08:40 nvmf_tcp.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=82078 00:19:50.753 17:08:40 nvmf_tcp.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:50.753 17:08:40 nvmf_tcp.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:19:51.012 Running I/O for 10 seconds... 
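On the initiator side, bdevperf runs with its own RPC socket and the controller is attached with the short reconnect settings this timeout test exercises. A sketch under the same assumptions as above ($SPDK is shorthand, all flags as logged):

  # bdevperf on core mask 0x4: queue depth 128, 4 KiB verify workload for 10 seconds
  "$SPDK"/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &
  # bdev_nvme options as logged, then attach NVMe0 to the subsystem with a 5 s
  # controller-loss timeout and a 2 s reconnect delay
  "$SPDK"/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
  "$SPDK"/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
  # kick off the workload over bdevperf's RPC socket
  "$SPDK"/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &

Removing the 10.0.0.2:4420 listener while this workload is in flight (the next step in the log) is what produces the recv-state errors and ABORTED - SQ DELETION completions that follow.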
00:19:51.997 17:08:41 nvmf_tcp.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:19:51.997 [2024-07-15 17:08:42.181415] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0ce50 is same with the state(5) to be set
[... the same nvmf_tcp_qpair_set_recv_state error for tqpair=0xa0ce50 is repeated through 2024-07-15 17:08:42.182966 ...]
00:19:51.998 [2024-07-15 17:08:42.183054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:61728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:51.998 [2024-07-15 17:08:42.183084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same READ / ABORTED - SQ DELETION (00/08) pair is repeated for each remaining outstanding command, lba 61736 through 62600, with varying cid ...]
[2024-07-15 17:08:42.185475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:62608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 [2024-07-15 17:08:42.185484]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.999 [2024-07-15 17:08:42.185500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:62632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.999 [2024-07-15 17:08:42.185510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.999 [2024-07-15 17:08:42.185521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:62640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.999 [2024-07-15 17:08:42.185535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.999 [2024-07-15 17:08:42.185546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:62648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.999 [2024-07-15 17:08:42.185556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.999 [2024-07-15 17:08:42.185567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.999 [2024-07-15 17:08:42.185576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.999 [2024-07-15 17:08:42.185587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:62664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.999 [2024-07-15 17:08:42.185597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.999 [2024-07-15 17:08:42.185608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:62672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.999 [2024-07-15 17:08:42.185617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.999 [2024-07-15 17:08:42.185628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:62680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.999 [2024-07-15 17:08:42.185637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.999 [2024-07-15 17:08:42.185648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:62688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.999 [2024-07-15 17:08:42.185657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.999 [2024-07-15 17:08:42.185668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:62696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.999 [2024-07-15 17:08:42.185677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.999 [2024-07-15 17:08:42.185688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:62704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.999 [2024-07-15 17:08:42.185698] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.999 [2024-07-15 17:08:42.185709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:62712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.999 [2024-07-15 17:08:42.185717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.999 [2024-07-15 17:08:42.185728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:62720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.999 [2024-07-15 17:08:42.185738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.999 [2024-07-15 17:08:42.185749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:62728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.999 [2024-07-15 17:08:42.185758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.999 [2024-07-15 17:08:42.185770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:62736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.999 [2024-07-15 17:08:42.185779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.999 [2024-07-15 17:08:42.185790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:62744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.999 [2024-07-15 17:08:42.185800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.999 [2024-07-15 17:08:42.185811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:62616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.999 [2024-07-15 17:08:42.185820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.999 [2024-07-15 17:08:42.185835] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15be4d0 is same with the state(5) to be set 00:19:51.999 [2024-07-15 17:08:42.185848] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:51.999 [2024-07-15 17:08:42.185855] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:51.999 [2024-07-15 17:08:42.185876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:62624 len:8 PRP1 0x0 PRP2 0x0 00:19:51.999 [2024-07-15 17:08:42.185885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.999 [2024-07-15 17:08:42.185940] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x15be4d0 was disconnected and freed. reset controller. 
00:19:51.999 [2024-07-15 17:08:42.186185] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:51.999 [2024-07-15 17:08:42.186261] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1573d40 (9): Bad file descriptor 00:19:51.999 [2024-07-15 17:08:42.186389] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:51.999 [2024-07-15 17:08:42.186412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1573d40 with addr=10.0.0.2, port=4420 00:19:51.999 [2024-07-15 17:08:42.186423] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1573d40 is same with the state(5) to be set 00:19:51.999 [2024-07-15 17:08:42.186442] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1573d40 (9): Bad file descriptor 00:19:51.999 [2024-07-15 17:08:42.186458] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:51.999 [2024-07-15 17:08:42.186468] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:51.999 [2024-07-15 17:08:42.186479] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:51.999 [2024-07-15 17:08:42.186499] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:51.999 [2024-07-15 17:08:42.186510] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:51.999 17:08:42 nvmf_tcp.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:19:53.898 [2024-07-15 17:08:44.186773] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:53.898 [2024-07-15 17:08:44.186843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1573d40 with addr=10.0.0.2, port=4420 00:19:53.898 [2024-07-15 17:08:44.186860] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1573d40 is same with the state(5) to be set 00:19:53.898 [2024-07-15 17:08:44.186889] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1573d40 (9): Bad file descriptor 00:19:53.898 [2024-07-15 17:08:44.186923] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:53.898 [2024-07-15 17:08:44.186936] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:53.898 [2024-07-15 17:08:44.186947] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:53.898 [2024-07-15 17:08:44.186974] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:53.898 [2024-07-15 17:08:44.186986] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:54.156 17:08:44 nvmf_tcp.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:19:54.156 17:08:44 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:54.156 17:08:44 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:19:54.414 17:08:44 nvmf_tcp.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:19:54.414 17:08:44 nvmf_tcp.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:19:54.414 17:08:44 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:19:54.414 17:08:44 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:19:54.672 17:08:44 nvmf_tcp.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:19:54.672 17:08:44 nvmf_tcp.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:19:56.043 [2024-07-15 17:08:46.187215] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:56.043 [2024-07-15 17:08:46.187300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1573d40 with addr=10.0.0.2, port=4420 00:19:56.044 [2024-07-15 17:08:46.187318] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1573d40 is same with the state(5) to be set 00:19:56.044 [2024-07-15 17:08:46.187347] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1573d40 (9): Bad file descriptor 00:19:56.044 [2024-07-15 17:08:46.187368] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:56.044 [2024-07-15 17:08:46.187392] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:56.044 [2024-07-15 17:08:46.187404] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:56.044 [2024-07-15 17:08:46.187432] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:56.044 [2024-07-15 17:08:46.187443] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:57.936 [2024-07-15 17:08:48.187565] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:57.936 [2024-07-15 17:08:48.187633] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:57.936 [2024-07-15 17:08:48.187645] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:57.936 [2024-07-15 17:08:48.187656] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:19:57.936 [2024-07-15 17:08:48.187683] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
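The get_controller and get_bdev steps above read the controller and bdev names back over the bdevperf RPC socket while reconnect attempts are still pending. The lines below are a minimal sketch of that verification step, reusing the rpc.py path and socket shown in this trace; the comparison wrapper itself is illustrative and not part of the test script.

# Query the bdevperf RPC socket and compare the reported names, as the
# get_controller/get_bdev helpers in the trace do. Paths mirror this log.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/var/tmp/bdevperf.sock
ctrlr=$("$RPC" -s "$SOCK" bdev_nvme_get_controllers | jq -r '.[].name')
bdev=$("$RPC" -s "$SOCK" bdev_get_bdevs | jq -r '.[].name')
# While reconnect attempts are still allowed the names remain NVMe0 / NVMe0n1;
# after the controller-loss timeout expires they come back empty, as seen
# later in this trace.
if [[ "$ctrlr" == "NVMe0" && "$bdev" == "NVMe0n1" ]]; then
    echo "controller and bdev still registered"
else
    echo "controller/bdev removed"
fi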
00:19:59.316 00:19:59.316 Latency(us) 00:19:59.316 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:59.316 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:59.316 Verification LBA range: start 0x0 length 0x4000 00:19:59.316 NVMe0n1 : 8.12 949.67 3.71 15.75 0.00 132632.51 3664.06 7046430.72 00:19:59.316 =================================================================================================================== 00:19:59.316 Total : 949.67 3.71 15.75 0.00 132632.51 3664.06 7046430.72 00:19:59.316 0 00:19:59.574 17:08:49 nvmf_tcp.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:19:59.574 17:08:49 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:59.574 17:08:49 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:19:59.832 17:08:50 nvmf_tcp.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:19:59.832 17:08:50 nvmf_tcp.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 00:19:59.832 17:08:50 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:19:59.832 17:08:50 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:20:00.090 17:08:50 nvmf_tcp.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:20:00.090 17:08:50 nvmf_tcp.nvmf_timeout -- host/timeout.sh@65 -- # wait 82078 00:20:00.090 17:08:50 nvmf_tcp.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 82055 00:20:00.090 17:08:50 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 82055 ']' 00:20:00.090 17:08:50 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 82055 00:20:00.090 17:08:50 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:20:00.090 17:08:50 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:00.090 17:08:50 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82055 00:20:00.090 17:08:50 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:00.090 17:08:50 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:00.090 17:08:50 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82055' 00:20:00.090 killing process with pid 82055 00:20:00.090 17:08:50 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 82055 00:20:00.090 Received shutdown signal, test time was about 9.263418 seconds 00:20:00.090 00:20:00.090 Latency(us) 00:20:00.090 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:00.090 =================================================================================================================== 00:20:00.090 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:00.090 17:08:50 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 82055 00:20:00.349 17:08:50 nvmf_tcp.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:00.607 [2024-07-15 17:08:50.807477] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:00.607 17:08:50 nvmf_tcp.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=82200 00:20:00.607 17:08:50 nvmf_tcp.nvmf_timeout -- host/timeout.sh@73 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:20:00.607 17:08:50 nvmf_tcp.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 82200 /var/tmp/bdevperf.sock 00:20:00.607 17:08:50 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 82200 ']' 00:20:00.607 17:08:50 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:00.607 17:08:50 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:00.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:00.607 17:08:50 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:00.607 17:08:50 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:00.607 17:08:50 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:00.608 [2024-07-15 17:08:50.877979] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:20:00.608 [2024-07-15 17:08:50.878078] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82200 ] 00:20:00.866 [2024-07-15 17:08:51.012537] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:00.866 [2024-07-15 17:08:51.124900] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:01.124 [2024-07-15 17:08:51.178082] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:20:01.690 17:08:51 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:01.690 17:08:51 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:20:01.690 17:08:51 nvmf_tcp.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:20:01.948 17:08:52 nvmf_tcp.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:20:02.206 NVMe0n1 00:20:02.206 17:08:52 nvmf_tcp.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=82219 00:20:02.206 17:08:52 nvmf_tcp.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:02.206 17:08:52 nvmf_tcp.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:20:02.206 Running I/O for 10 seconds... 
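Before the 10-second run above starts, the trace shows the new bdevperf instance being configured over its RPC socket: the retry option used by the test (-r -1), an attach with explicit reconnect and timeout parameters, then perform_tests. The following is a minimal sketch of that sequence using the same paths and values that appear in the trace.

# Attach the controller with the reconnect/timeout parameters under test
# (values copied from the trace), then start the workload.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/var/tmp/bdevperf.sock
"$RPC" -s "$SOCK" bdev_nvme_set_options -r -1
"$RPC" -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 \
    --reconnect-delay-sec 1
# bdevperf was started with -z, so the job it was launched with
# (-q 128 -o 4096 -w verify -t 10) only runs once perform_tests is issued.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s "$SOCK" perform_tests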
00:20:03.154 17:08:53 nvmf_tcp.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:03.422 [2024-07-15 17:08:53.668589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:66520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.422 [2024-07-15 17:08:53.668667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.422 [2024-07-15 17:08:53.668691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:66528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.422 [2024-07-15 17:08:53.668702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.422 [2024-07-15 17:08:53.668714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:66536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.422 [2024-07-15 17:08:53.668725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.422 [2024-07-15 17:08:53.668736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:66544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.422 [2024-07-15 17:08:53.668746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.422 [2024-07-15 17:08:53.668758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:66552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.422 [2024-07-15 17:08:53.668767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.422 [2024-07-15 17:08:53.668779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:66560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.422 [2024-07-15 17:08:53.668788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.423 [2024-07-15 17:08:53.668799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:66568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.423 [2024-07-15 17:08:53.668809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.423 [2024-07-15 17:08:53.668820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:66576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.423 [2024-07-15 17:08:53.668829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.423 [2024-07-15 17:08:53.668841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:66584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.423 [2024-07-15 17:08:53.668850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.423 [2024-07-15 17:08:53.668861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:66592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.423 [2024-07-15 
17:08:53.668870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.423 [2024-07-15 17:08:53.668882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:66600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.423 [2024-07-15 17:08:53.668891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.423 [2024-07-15 17:08:53.668902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:66608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.423 [2024-07-15 17:08:53.668911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.423 [2024-07-15 17:08:53.668922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:66616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.423 [2024-07-15 17:08:53.668932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.423 [2024-07-15 17:08:53.668943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:66624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.423 [2024-07-15 17:08:53.668952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.423 [2024-07-15 17:08:53.668963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:66632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.423 [2024-07-15 17:08:53.668973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.423 [2024-07-15 17:08:53.668984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:66640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.423 [2024-07-15 17:08:53.668994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.423 [2024-07-15 17:08:53.669005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:66648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.423 [2024-07-15 17:08:53.669015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.423 [2024-07-15 17:08:53.669029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:66656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.423 [2024-07-15 17:08:53.669039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.423 [2024-07-15 17:08:53.669051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:66664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.423 [2024-07-15 17:08:53.669061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.423 [2024-07-15 17:08:53.669072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:66672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.423 [2024-07-15 17:08:53.669082] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.423 [2024-07-15 17:08:53.669093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:66680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.423 [2024-07-15 17:08:53.669102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.423 [2024-07-15 17:08:53.669113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:66688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.423 [2024-07-15 17:08:53.669123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.423 [2024-07-15 17:08:53.669134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:66696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.423 [2024-07-15 17:08:53.669143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.423 [2024-07-15 17:08:53.669154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:65704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.423 [2024-07-15 17:08:53.669163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.423 [2024-07-15 17:08:53.669174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:65712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.423 [2024-07-15 17:08:53.669184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.423 [2024-07-15 17:08:53.669200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:65720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.423 [2024-07-15 17:08:53.669209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.423 [2024-07-15 17:08:53.669220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:65728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.423 [2024-07-15 17:08:53.669230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.423 [2024-07-15 17:08:53.669241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:65736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.423 [2024-07-15 17:08:53.669250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.423 [2024-07-15 17:08:53.669261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:65744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.423 [2024-07-15 17:08:53.669270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.423 [2024-07-15 17:08:53.669282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:65752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.423 [2024-07-15 17:08:53.669291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.423 [2024-07-15 17:08:53.669302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:65760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.423 [2024-07-15 17:08:53.669311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.423 [2024-07-15 17:08:53.669322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:65768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.423 [2024-07-15 17:08:53.669332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.423 [2024-07-15 17:08:53.669343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:65776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.423 [2024-07-15 17:08:53.669352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.423 [2024-07-15 17:08:53.669376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:65784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.423 [2024-07-15 17:08:53.669386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.423 [2024-07-15 17:08:53.669398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:65792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.423 [2024-07-15 17:08:53.669407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.423 [2024-07-15 17:08:53.669418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:65800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.423 [2024-07-15 17:08:53.669428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.423 [2024-07-15 17:08:53.669439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:65808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.423 [2024-07-15 17:08:53.669449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.423 [2024-07-15 17:08:53.669461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:65816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.423 [2024-07-15 17:08:53.669470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.423 [2024-07-15 17:08:53.669481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:66704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.423 [2024-07-15 17:08:53.669490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.423 [2024-07-15 17:08:53.669502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:66712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.423 [2024-07-15 17:08:53.669511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.423 [2024-07-15 17:08:53.669522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:65824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.423 [2024-07-15 17:08:53.669531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.423 [2024-07-15 17:08:53.669543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:65832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.423 [2024-07-15 17:08:53.669553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.423 [2024-07-15 17:08:53.669564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:65840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.423 [2024-07-15 17:08:53.669574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.423 [2024-07-15 17:08:53.669585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:65848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.423 [2024-07-15 17:08:53.669595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.423 [2024-07-15 17:08:53.669606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:65856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.423 [2024-07-15 17:08:53.669615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.423 [2024-07-15 17:08:53.669627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:65864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.423 [2024-07-15 17:08:53.669636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.423 [2024-07-15 17:08:53.669648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:65872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.423 [2024-07-15 17:08:53.669657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.423 [2024-07-15 17:08:53.669669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:66720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.423 [2024-07-15 17:08:53.669678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.423 [2024-07-15 17:08:53.669690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:65880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.424 [2024-07-15 17:08:53.669699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.424 [2024-07-15 17:08:53.669711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:65888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.424 [2024-07-15 17:08:53.669720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:03.424 [2024-07-15 17:08:53.669731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:65896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.424 [2024-07-15 17:08:53.669741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.424 [2024-07-15 17:08:53.669752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:65904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.424 [2024-07-15 17:08:53.669762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.424 [2024-07-15 17:08:53.669773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:65912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.424 [2024-07-15 17:08:53.669783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.424 [2024-07-15 17:08:53.669794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:65920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.424 [2024-07-15 17:08:53.669804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.424 [2024-07-15 17:08:53.669815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:65928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.424 [2024-07-15 17:08:53.669824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.424 [2024-07-15 17:08:53.669836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:65936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.424 [2024-07-15 17:08:53.669845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.424 [2024-07-15 17:08:53.669856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:65944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.424 [2024-07-15 17:08:53.669865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.424 [2024-07-15 17:08:53.669877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:65952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.424 [2024-07-15 17:08:53.669886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.424 [2024-07-15 17:08:53.669897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:65960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.424 [2024-07-15 17:08:53.669907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.424 [2024-07-15 17:08:53.669918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:65968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.424 [2024-07-15 17:08:53.669927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.424 [2024-07-15 
17:08:53.669939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:65976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.424 [2024-07-15 17:08:53.669948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.424 [2024-07-15 17:08:53.669959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:65984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.424 [2024-07-15 17:08:53.669968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.424 [2024-07-15 17:08:53.669979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:65992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.424 [2024-07-15 17:08:53.669988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.424 [2024-07-15 17:08:53.670000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:66000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.424 [2024-07-15 17:08:53.670009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.424 [2024-07-15 17:08:53.670021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:66008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.424 [2024-07-15 17:08:53.670030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.424 [2024-07-15 17:08:53.670042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:66016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.424 [2024-07-15 17:08:53.670051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.424 [2024-07-15 17:08:53.670062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:66024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.424 [2024-07-15 17:08:53.670071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.424 [2024-07-15 17:08:53.670083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:66032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.424 [2024-07-15 17:08:53.670092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.424 [2024-07-15 17:08:53.670104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:66040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.424 [2024-07-15 17:08:53.670113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.424 [2024-07-15 17:08:53.670124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:66048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.424 [2024-07-15 17:08:53.670133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.424 [2024-07-15 17:08:53.670144] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:66056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.424 [2024-07-15 17:08:53.670154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.424 [2024-07-15 17:08:53.670165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:66064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.424 [2024-07-15 17:08:53.670174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.424 [2024-07-15 17:08:53.670185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:66072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.424 [2024-07-15 17:08:53.670195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.424 [2024-07-15 17:08:53.670206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:66080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.424 [2024-07-15 17:08:53.670215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.424 [2024-07-15 17:08:53.670226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:66088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.424 [2024-07-15 17:08:53.670236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.424 [2024-07-15 17:08:53.670247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:66096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.424 [2024-07-15 17:08:53.670256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.424 [2024-07-15 17:08:53.670267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:66104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.424 [2024-07-15 17:08:53.670276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.424 [2024-07-15 17:08:53.670287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:66112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.424 [2024-07-15 17:08:53.670297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.424 [2024-07-15 17:08:53.670308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:66120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.424 [2024-07-15 17:08:53.670317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.424 [2024-07-15 17:08:53.670329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:66128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.424 [2024-07-15 17:08:53.670338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.424 [2024-07-15 17:08:53.670349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:75 nsid:1 lba:66136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.424 [2024-07-15 17:08:53.670369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.424 [2024-07-15 17:08:53.670381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:66144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.424 [2024-07-15 17:08:53.670391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.424 [2024-07-15 17:08:53.670402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:66152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.424 [2024-07-15 17:08:53.670411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.424 [2024-07-15 17:08:53.670423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:66160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.424 [2024-07-15 17:08:53.670433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.424 [2024-07-15 17:08:53.670444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:66168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.424 [2024-07-15 17:08:53.670454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.424 [2024-07-15 17:08:53.670465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:66176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.424 [2024-07-15 17:08:53.670475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.424 [2024-07-15 17:08:53.670486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:66184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.424 [2024-07-15 17:08:53.670495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.424 [2024-07-15 17:08:53.670506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:66192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.424 [2024-07-15 17:08:53.670516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.424 [2024-07-15 17:08:53.670527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:66200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.424 [2024-07-15 17:08:53.670536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.424 [2024-07-15 17:08:53.670548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:66208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.424 [2024-07-15 17:08:53.670557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.424 [2024-07-15 17:08:53.670568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:66216 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.424 [2024-07-15 17:08:53.670577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.424 [2024-07-15 17:08:53.670588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:66224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.425 [2024-07-15 17:08:53.670597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.425 [2024-07-15 17:08:53.670609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:66232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.425 [2024-07-15 17:08:53.670618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.425 [2024-07-15 17:08:53.670629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:66240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.425 [2024-07-15 17:08:53.670638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.425 [2024-07-15 17:08:53.670649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:66248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.425 [2024-07-15 17:08:53.670659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.425 [2024-07-15 17:08:53.670671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:66256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.425 [2024-07-15 17:08:53.670680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.425 [2024-07-15 17:08:53.670692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:66264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.425 [2024-07-15 17:08:53.670701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.425 [2024-07-15 17:08:53.670712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:66272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.425 [2024-07-15 17:08:53.670722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.425 [2024-07-15 17:08:53.670733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:66280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.425 [2024-07-15 17:08:53.670743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.425 [2024-07-15 17:08:53.670755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:66288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.425 [2024-07-15 17:08:53.670773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.425 [2024-07-15 17:08:53.670785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:66296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:03.425 [2024-07-15 17:08:53.670795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.425 [2024-07-15 17:08:53.670806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:66304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.425 [2024-07-15 17:08:53.670815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.425 [2024-07-15 17:08:53.670827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:66312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.425 [2024-07-15 17:08:53.670836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.425 [2024-07-15 17:08:53.670847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:66320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.425 [2024-07-15 17:08:53.670856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.425 [2024-07-15 17:08:53.670867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:66328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.425 [2024-07-15 17:08:53.670876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.425 [2024-07-15 17:08:53.670888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:66336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.425 [2024-07-15 17:08:53.670897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.425 [2024-07-15 17:08:53.670908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:66344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.425 [2024-07-15 17:08:53.670918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.425 [2024-07-15 17:08:53.670929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:66352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.425 [2024-07-15 17:08:53.670938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.425 [2024-07-15 17:08:53.670950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:66360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.425 [2024-07-15 17:08:53.670959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.425 [2024-07-15 17:08:53.670970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:66368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.425 [2024-07-15 17:08:53.670979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.425 [2024-07-15 17:08:53.670991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:66376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.425 [2024-07-15 17:08:53.671000] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.425 [2024-07-15 17:08:53.671011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:66384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.425 [2024-07-15 17:08:53.671020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.425 [2024-07-15 17:08:53.671032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:66392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.425 [2024-07-15 17:08:53.671042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.425 [2024-07-15 17:08:53.671053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:66400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.425 [2024-07-15 17:08:53.671062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.425 [2024-07-15 17:08:53.671073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:66408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.425 [2024-07-15 17:08:53.671082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.425 [2024-07-15 17:08:53.671094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:66416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.425 [2024-07-15 17:08:53.671108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.425 [2024-07-15 17:08:53.671119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:66424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.425 [2024-07-15 17:08:53.671129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.425 [2024-07-15 17:08:53.671141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:66432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.425 [2024-07-15 17:08:53.671151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.425 [2024-07-15 17:08:53.671162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:66440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.425 [2024-07-15 17:08:53.671171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.425 [2024-07-15 17:08:53.671182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:66448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.425 [2024-07-15 17:08:53.671192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.425 [2024-07-15 17:08:53.671204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:66456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.425 [2024-07-15 17:08:53.671213] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.425 [2024-07-15 17:08:53.671224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:66464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.425 [2024-07-15 17:08:53.671234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.425 [2024-07-15 17:08:53.671245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:66472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.425 [2024-07-15 17:08:53.671254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.425 [2024-07-15 17:08:53.671266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:66480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.425 [2024-07-15 17:08:53.671275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.425 [2024-07-15 17:08:53.671287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:66488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.425 [2024-07-15 17:08:53.671296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.425 [2024-07-15 17:08:53.671307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:66496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.425 [2024-07-15 17:08:53.671316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.425 [2024-07-15 17:08:53.671327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:66504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.425 [2024-07-15 17:08:53.671337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.425 [2024-07-15 17:08:53.671348] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d7e4d0 is same with the state(5) to be set 00:20:03.425 [2024-07-15 17:08:53.671370] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:03.425 [2024-07-15 17:08:53.671379] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:03.425 [2024-07-15 17:08:53.671388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:66512 len:8 PRP1 0x0 PRP2 0x0 00:20:03.425 [2024-07-15 17:08:53.671397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.425 [2024-07-15 17:08:53.671448] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1d7e4d0 was disconnected and freed. reset controller. 
00:20:03.425 [2024-07-15 17:08:53.671712] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:03.425 [2024-07-15 17:08:53.671789] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d33d40 (9): Bad file descriptor 00:20:03.425 [2024-07-15 17:08:53.671898] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:03.425 [2024-07-15 17:08:53.671919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d33d40 with addr=10.0.0.2, port=4420 00:20:03.425 [2024-07-15 17:08:53.671930] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d33d40 is same with the state(5) to be set 00:20:03.425 [2024-07-15 17:08:53.671948] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d33d40 (9): Bad file descriptor 00:20:03.425 [2024-07-15 17:08:53.671964] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:03.425 [2024-07-15 17:08:53.671974] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:03.425 [2024-07-15 17:08:53.671985] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:03.425 [2024-07-15 17:08:53.672005] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:03.426 [2024-07-15 17:08:53.672016] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:03.426 17:08:53 nvmf_tcp.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1 00:20:04.798 [2024-07-15 17:08:54.672169] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:04.798 [2024-07-15 17:08:54.672245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d33d40 with addr=10.0.0.2, port=4420 00:20:04.798 [2024-07-15 17:08:54.672262] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d33d40 is same with the state(5) to be set 00:20:04.798 [2024-07-15 17:08:54.672287] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d33d40 (9): Bad file descriptor 00:20:04.798 [2024-07-15 17:08:54.672306] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:04.798 [2024-07-15 17:08:54.672316] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:04.798 [2024-07-15 17:08:54.672327] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:04.798 [2024-07-15 17:08:54.672367] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:04.798 [2024-07-15 17:08:54.672381] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:04.798 17:08:54 nvmf_tcp.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:20:04.798 [2024-07-15 17:08:54.924453] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:20:04.798 17:08:54 nvmf_tcp.nvmf_timeout -- host/timeout.sh@92 -- # wait 82219
00:20:05.732 [2024-07-15 17:08:55.688258] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:20:12.299
00:20:12.299 Latency(us)
00:20:12.299 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:12.299 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:20:12.299 Verification LBA range: start 0x0 length 0x4000
00:20:12.300 NVMe0n1 : 10.01 6293.51 24.58 0.00 0.00 20290.20 1608.61 3019898.88
00:20:12.300 ===================================================================================================================
00:20:12.300 Total : 6293.51 24.58 0.00 0.00 20290.20 1608.61 3019898.88
00:20:12.300 0
00:20:12.300 17:09:02 nvmf_tcp.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=82325
00:20:12.300 17:09:02 nvmf_tcp.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:20:12.300 17:09:02 nvmf_tcp.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1
00:20:12.558 Running I/O for 10 seconds...
00:20:13.521 17:09:03 nvmf_tcp.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:20:13.521 [2024-07-15 17:09:03.732212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:65752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:13.521 [2024-07-15 17:09:03.732290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:13.521 [2024-07-15 17:09:03.732314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:65760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:13.521 [2024-07-15 17:09:03.732325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:13.521 [2024-07-15 17:09:03.732337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:65768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:13.521 [2024-07-15 17:09:03.732347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:13.521 [2024-07-15 17:09:03.732375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:65776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:13.521 [2024-07-15 17:09:03.732387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:13.521 [2024-07-15 17:09:03.732398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:65784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:13.521 [2024-07-15 17:09:03.732408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08)
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.521 [2024-07-15 17:09:03.732419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:65792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.521 [2024-07-15 17:09:03.732430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.521 [2024-07-15 17:09:03.732441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:65800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.521 [2024-07-15 17:09:03.732451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.521 [2024-07-15 17:09:03.732462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:65808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.521 [2024-07-15 17:09:03.732471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.521 [2024-07-15 17:09:03.732482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:65176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.521 [2024-07-15 17:09:03.732492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.521 [2024-07-15 17:09:03.732503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:65184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.521 [2024-07-15 17:09:03.732512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.521 [2024-07-15 17:09:03.732523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:65192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.521 [2024-07-15 17:09:03.732533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.521 [2024-07-15 17:09:03.732544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:65200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.521 [2024-07-15 17:09:03.732554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.521 [2024-07-15 17:09:03.732565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:65208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.521 [2024-07-15 17:09:03.732575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.521 [2024-07-15 17:09:03.732586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:65216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.521 [2024-07-15 17:09:03.732596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.521 [2024-07-15 17:09:03.732607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:65224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.521 [2024-07-15 17:09:03.732617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:13.521 [2024-07-15 17:09:03.732628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:65232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.521 [2024-07-15 17:09:03.732638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.521 [2024-07-15 17:09:03.732649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:65240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.521 [2024-07-15 17:09:03.732659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.521 [2024-07-15 17:09:03.732673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:65248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.521 [2024-07-15 17:09:03.732683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.521 [2024-07-15 17:09:03.732694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:65256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.521 [2024-07-15 17:09:03.732704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.521 [2024-07-15 17:09:03.732716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:65264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.521 [2024-07-15 17:09:03.732725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.521 [2024-07-15 17:09:03.732737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:65272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.521 [2024-07-15 17:09:03.732746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.521 [2024-07-15 17:09:03.732758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:65280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.521 [2024-07-15 17:09:03.732768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.521 [2024-07-15 17:09:03.732779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:65288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.521 [2024-07-15 17:09:03.732789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.522 [2024-07-15 17:09:03.732800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:65296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.522 [2024-07-15 17:09:03.732810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.522 [2024-07-15 17:09:03.732821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:65304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.522 [2024-07-15 17:09:03.732830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.522 [2024-07-15 17:09:03.732842] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:65312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.522 [2024-07-15 17:09:03.732851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.522 [2024-07-15 17:09:03.732862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:65320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.522 [2024-07-15 17:09:03.732871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.522 [2024-07-15 17:09:03.732882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:65328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.522 [2024-07-15 17:09:03.732891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.522 [2024-07-15 17:09:03.732902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:65336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.522 [2024-07-15 17:09:03.732912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.522 [2024-07-15 17:09:03.732922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:65344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.522 [2024-07-15 17:09:03.732931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.522 [2024-07-15 17:09:03.732942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:65352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.522 [2024-07-15 17:09:03.732952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.522 [2024-07-15 17:09:03.732963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:65360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.522 [2024-07-15 17:09:03.732972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.522 [2024-07-15 17:09:03.732983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:65816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.522 [2024-07-15 17:09:03.732993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.522 [2024-07-15 17:09:03.733004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:65824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.522 [2024-07-15 17:09:03.733014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.522 [2024-07-15 17:09:03.733025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:65832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.522 [2024-07-15 17:09:03.733035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.522 [2024-07-15 17:09:03.733046] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:65840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.522 [2024-07-15 17:09:03.733055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.522 [2024-07-15 17:09:03.733067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:65848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.522 [2024-07-15 17:09:03.733077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.522 [2024-07-15 17:09:03.733089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:65856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.522 [2024-07-15 17:09:03.733100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.522 [2024-07-15 17:09:03.733111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:65864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.522 [2024-07-15 17:09:03.733120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.522 [2024-07-15 17:09:03.733132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:65872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.522 [2024-07-15 17:09:03.733141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.522 [2024-07-15 17:09:03.733152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:65368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.522 [2024-07-15 17:09:03.733161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.522 [2024-07-15 17:09:03.733173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:65376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.522 [2024-07-15 17:09:03.733182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.522 [2024-07-15 17:09:03.733193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:65384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.522 [2024-07-15 17:09:03.733203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.522 [2024-07-15 17:09:03.733214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:65392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.522 [2024-07-15 17:09:03.733223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.522 [2024-07-15 17:09:03.733235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:65400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.522 [2024-07-15 17:09:03.733244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.522 [2024-07-15 17:09:03.733255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:65408 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.522 [2024-07-15 17:09:03.733264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.522 [2024-07-15 17:09:03.733276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:65416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.522 [2024-07-15 17:09:03.733285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.522 [2024-07-15 17:09:03.733296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:65424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.522 [2024-07-15 17:09:03.733306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.522 [2024-07-15 17:09:03.733317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:65432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.522 [2024-07-15 17:09:03.733326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.522 [2024-07-15 17:09:03.733338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:65440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.522 [2024-07-15 17:09:03.733348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.522 [2024-07-15 17:09:03.733369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:65448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.522 [2024-07-15 17:09:03.733379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.522 [2024-07-15 17:09:03.733390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:65456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.522 [2024-07-15 17:09:03.733400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.522 [2024-07-15 17:09:03.733411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:65464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.522 [2024-07-15 17:09:03.733420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.522 [2024-07-15 17:09:03.733432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:65472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.522 [2024-07-15 17:09:03.733441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.522 [2024-07-15 17:09:03.733452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:65480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.522 [2024-07-15 17:09:03.733463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.522 [2024-07-15 17:09:03.733475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:65488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:20:13.522 [2024-07-15 17:09:03.733484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.522 [2024-07-15 17:09:03.733496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:65880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.522 [2024-07-15 17:09:03.733505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.522 [2024-07-15 17:09:03.733516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:65888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.522 [2024-07-15 17:09:03.733526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.522 [2024-07-15 17:09:03.733538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:65896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.522 [2024-07-15 17:09:03.733547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.522 [2024-07-15 17:09:03.733558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:65904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.522 [2024-07-15 17:09:03.733567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.522 [2024-07-15 17:09:03.733578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:65912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.522 [2024-07-15 17:09:03.733587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.522 [2024-07-15 17:09:03.733598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:65920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.522 [2024-07-15 17:09:03.733608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.522 [2024-07-15 17:09:03.733619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:65928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.522 [2024-07-15 17:09:03.733628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.522 [2024-07-15 17:09:03.733639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:65936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.522 [2024-07-15 17:09:03.733648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.522 [2024-07-15 17:09:03.733659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:65496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.522 [2024-07-15 17:09:03.733669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.522 [2024-07-15 17:09:03.733680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:65504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.523 [2024-07-15 17:09:03.733690] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.523 [2024-07-15 17:09:03.733701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:65512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.523 [2024-07-15 17:09:03.733710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.523 [2024-07-15 17:09:03.733722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:65520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.523 [2024-07-15 17:09:03.733731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.523 [2024-07-15 17:09:03.733743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:65528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.523 [2024-07-15 17:09:03.733752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.523 [2024-07-15 17:09:03.733763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:65536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.523 [2024-07-15 17:09:03.733773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.523 [2024-07-15 17:09:03.733784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:65544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.523 [2024-07-15 17:09:03.733794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.523 [2024-07-15 17:09:03.733806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:65552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.523 [2024-07-15 17:09:03.733815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.523 [2024-07-15 17:09:03.733826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:65560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.523 [2024-07-15 17:09:03.733836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.523 [2024-07-15 17:09:03.733847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:65568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.523 [2024-07-15 17:09:03.733856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.523 [2024-07-15 17:09:03.733867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:65576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.523 [2024-07-15 17:09:03.733876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.523 [2024-07-15 17:09:03.733887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:65584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.523 [2024-07-15 17:09:03.733897] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.523 [2024-07-15 17:09:03.733908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:65592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.523 [2024-07-15 17:09:03.733918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.523 [2024-07-15 17:09:03.733929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:65600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.523 [2024-07-15 17:09:03.733938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.523 [2024-07-15 17:09:03.733950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:65608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.523 [2024-07-15 17:09:03.733960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.523 [2024-07-15 17:09:03.733972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:65616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.523 [2024-07-15 17:09:03.733981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.523 [2024-07-15 17:09:03.733993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:65944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.523 [2024-07-15 17:09:03.734002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.523 [2024-07-15 17:09:03.734014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:65952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.523 [2024-07-15 17:09:03.734023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.523 [2024-07-15 17:09:03.734034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:65960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.523 [2024-07-15 17:09:03.734043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.523 [2024-07-15 17:09:03.734054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:65968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.523 [2024-07-15 17:09:03.734064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.523 [2024-07-15 17:09:03.734075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:65976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.523 [2024-07-15 17:09:03.734084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.523 [2024-07-15 17:09:03.734095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.523 [2024-07-15 17:09:03.734104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.523 [2024-07-15 17:09:03.734115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:65992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.523 [2024-07-15 17:09:03.734125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.523 [2024-07-15 17:09:03.734137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:66000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.523 [2024-07-15 17:09:03.734146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.523 [2024-07-15 17:09:03.734158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:66008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.523 [2024-07-15 17:09:03.734167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.523 [2024-07-15 17:09:03.734178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:66016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.523 [2024-07-15 17:09:03.734188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.523 [2024-07-15 17:09:03.734199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:66024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.523 [2024-07-15 17:09:03.734208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.523 [2024-07-15 17:09:03.734220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:66032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.523 [2024-07-15 17:09:03.734229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.523 [2024-07-15 17:09:03.734241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:66040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.523 [2024-07-15 17:09:03.734250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.523 [2024-07-15 17:09:03.734261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:66048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.523 [2024-07-15 17:09:03.734271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.523 [2024-07-15 17:09:03.734282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:66056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.523 [2024-07-15 17:09:03.734292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.523 [2024-07-15 17:09:03.734311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:66064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.523 [2024-07-15 17:09:03.734321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:20:13.523 [2024-07-15 17:09:03.734332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:66072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.523 [2024-07-15 17:09:03.734341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.523 [2024-07-15 17:09:03.734362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:66080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.523 [2024-07-15 17:09:03.734373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.523 [2024-07-15 17:09:03.734385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:66088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.523 [2024-07-15 17:09:03.734394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.523 [2024-07-15 17:09:03.734405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:66096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.523 [2024-07-15 17:09:03.734415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.523 [2024-07-15 17:09:03.734426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:66104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.523 [2024-07-15 17:09:03.734435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.523 [2024-07-15 17:09:03.734447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:66112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.523 [2024-07-15 17:09:03.734464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.523 [2024-07-15 17:09:03.734476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:66120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.523 [2024-07-15 17:09:03.734486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.523 [2024-07-15 17:09:03.734497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:66128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:13.523 [2024-07-15 17:09:03.734506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.523 [2024-07-15 17:09:03.734517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:65624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.523 [2024-07-15 17:09:03.734527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.524 [2024-07-15 17:09:03.734539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:65632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.524 [2024-07-15 17:09:03.734548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.524 [2024-07-15 
17:09:03.734559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:65640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.524 [2024-07-15 17:09:03.734569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.524 [2024-07-15 17:09:03.734580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:65648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.524 [2024-07-15 17:09:03.734595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.524 [2024-07-15 17:09:03.734606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:65656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.524 [2024-07-15 17:09:03.734615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.524 [2024-07-15 17:09:03.734626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:65664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.524 [2024-07-15 17:09:03.734635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.524 [2024-07-15 17:09:03.734646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:65672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.524 [2024-07-15 17:09:03.734656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.524 [2024-07-15 17:09:03.734672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:65680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.524 [2024-07-15 17:09:03.734681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.524 [2024-07-15 17:09:03.734692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:65688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.524 [2024-07-15 17:09:03.734701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.524 [2024-07-15 17:09:03.734713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:65696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.524 [2024-07-15 17:09:03.734722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.524 [2024-07-15 17:09:03.734734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:65704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.524 [2024-07-15 17:09:03.734743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.524 [2024-07-15 17:09:03.734754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:65712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.524 [2024-07-15 17:09:03.734763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.524 [2024-07-15 17:09:03.734775] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:65720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.524 [2024-07-15 17:09:03.734784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.524 [2024-07-15 17:09:03.734796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:65728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.524 [2024-07-15 17:09:03.734809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.524 [2024-07-15 17:09:03.734821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:65736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.524 [2024-07-15 17:09:03.734831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.524 [2024-07-15 17:09:03.734841] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d7a180 is same with the state(5) to be set 00:20:13.524 [2024-07-15 17:09:03.734854] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:13.524 [2024-07-15 17:09:03.734862] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:13.524 [2024-07-15 17:09:03.734871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:65744 len:8 PRP1 0x0 PRP2 0x0 00:20:13.524 [2024-07-15 17:09:03.734880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.524 [2024-07-15 17:09:03.734891] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:13.524 [2024-07-15 17:09:03.734898] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:13.524 [2024-07-15 17:09:03.734906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66136 len:8 PRP1 0x0 PRP2 0x0 00:20:13.524 [2024-07-15 17:09:03.734916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.524 [2024-07-15 17:09:03.734925] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:13.524 [2024-07-15 17:09:03.734933] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:13.524 [2024-07-15 17:09:03.734941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66144 len:8 PRP1 0x0 PRP2 0x0 00:20:13.524 [2024-07-15 17:09:03.734950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.524 [2024-07-15 17:09:03.734960] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:13.524 [2024-07-15 17:09:03.734967] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:13.524 [2024-07-15 17:09:03.734975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66152 len:8 PRP1 0x0 PRP2 0x0 00:20:13.524 [2024-07-15 17:09:03.734989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.524 [2024-07-15 17:09:03.734999] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:13.524 [2024-07-15 17:09:03.735006] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:13.524 [2024-07-15 17:09:03.735015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66160 len:8 PRP1 0x0 PRP2 0x0 00:20:13.524 [2024-07-15 17:09:03.735024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.524 [2024-07-15 17:09:03.735033] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:13.524 [2024-07-15 17:09:03.735041] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:13.524 [2024-07-15 17:09:03.735049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66168 len:8 PRP1 0x0 PRP2 0x0 00:20:13.524 [2024-07-15 17:09:03.735058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.524 [2024-07-15 17:09:03.735067] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:13.524 [2024-07-15 17:09:03.735074] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:13.524 [2024-07-15 17:09:03.735082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66176 len:8 PRP1 0x0 PRP2 0x0 00:20:13.524 [2024-07-15 17:09:03.735091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.524 [2024-07-15 17:09:03.735105] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:13.524 [2024-07-15 17:09:03.735112] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:13.524 [2024-07-15 17:09:03.735120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66184 len:8 PRP1 0x0 PRP2 0x0 00:20:13.524 [2024-07-15 17:09:03.735129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.524 [2024-07-15 17:09:03.735139] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:13.524 [2024-07-15 17:09:03.735146] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:13.524 [2024-07-15 17:09:03.735154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66192 len:8 PRP1 0x0 PRP2 0x0 00:20:13.524 [2024-07-15 17:09:03.735163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.524 [2024-07-15 17:09:03.735216] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1d7a180 was disconnected and freed. reset controller. 
00:20:13.524 [2024-07-15 17:09:03.735463] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:13.524 [2024-07-15 17:09:03.735557] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d33d40 (9): Bad file descriptor 00:20:13.524 [2024-07-15 17:09:03.735661] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:13.524 [2024-07-15 17:09:03.735682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d33d40 with addr=10.0.0.2, port=4420 00:20:13.524 [2024-07-15 17:09:03.735693] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d33d40 is same with the state(5) to be set 00:20:13.524 [2024-07-15 17:09:03.735711] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d33d40 (9): Bad file descriptor 00:20:13.524 [2024-07-15 17:09:03.735754] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:13.524 [2024-07-15 17:09:03.735766] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:13.524 [2024-07-15 17:09:03.735777] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:13.524 [2024-07-15 17:09:03.735797] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:13.524 [2024-07-15 17:09:03.735814] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:13.524 17:09:03 nvmf_tcp.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:20:14.458 [2024-07-15 17:09:04.735959] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:14.458 [2024-07-15 17:09:04.736030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d33d40 with addr=10.0.0.2, port=4420 00:20:14.458 [2024-07-15 17:09:04.736047] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d33d40 is same with the state(5) to be set 00:20:14.458 [2024-07-15 17:09:04.736074] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d33d40 (9): Bad file descriptor 00:20:14.458 [2024-07-15 17:09:04.736109] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:14.458 [2024-07-15 17:09:04.736121] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:14.458 [2024-07-15 17:09:04.736132] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:14.458 [2024-07-15 17:09:04.736159] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:14.458 [2024-07-15 17:09:04.736171] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:15.833 [2024-07-15 17:09:05.736311] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:15.833 [2024-07-15 17:09:05.736405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d33d40 with addr=10.0.0.2, port=4420 00:20:15.833 [2024-07-15 17:09:05.736441] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d33d40 is same with the state(5) to be set 00:20:15.833 [2024-07-15 17:09:05.736473] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d33d40 (9): Bad file descriptor 00:20:15.833 [2024-07-15 17:09:05.736509] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:15.833 [2024-07-15 17:09:05.736521] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:15.833 [2024-07-15 17:09:05.736533] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:15.833 [2024-07-15 17:09:05.736560] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:15.834 [2024-07-15 17:09:05.736572] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:16.769 [2024-07-15 17:09:06.740069] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:16.769 [2024-07-15 17:09:06.740128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d33d40 with addr=10.0.0.2, port=4420 00:20:16.769 [2024-07-15 17:09:06.740161] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d33d40 is same with the state(5) to be set 00:20:16.769 [2024-07-15 17:09:06.740425] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d33d40 (9): Bad file descriptor 00:20:16.769 [2024-07-15 17:09:06.740680] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:16.769 [2024-07-15 17:09:06.740699] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:16.769 [2024-07-15 17:09:06.740711] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:16.769 [2024-07-15 17:09:06.744580] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:16.769 [2024-07-15 17:09:06.744612] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:16.769 17:09:06 nvmf_tcp.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:16.769 [2024-07-15 17:09:06.992236] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:16.769 17:09:07 nvmf_tcp.nvmf_timeout -- host/timeout.sh@103 -- # wait 82325 00:20:17.702 [2024-07-15 17:09:07.780174] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
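The connect() failures with errno = 111 in the block above happen while the test has taken the target's TCP listener down; they stop once host/timeout.sh re-adds the listener (the nvmf_subsystem_add_listener call traced at host/timeout.sh@102) and the next reset attempt logs "Resetting controller successful." A minimal sketch of that listener toggle follows, built only from rpc.py invocations that appear verbatim in this trace; the $rpc and $nqn variables are added here for readability and are not part of the original script.

#!/usr/bin/env bash
# Hedged sketch of the listener remove/re-add toggle reflected in this trace.
# Paths, address, port and NQN are copied from the log; the variables are illustrative.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

# Remove the TCP listener: the initiator's periodic reconnects now fail with
# "connect() failed, errno = 111" (connection refused), as printed above.
"$rpc" nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420

sleep 3   # the trace shows host/timeout.sh@101 sleeping while the reconnects keep failing

# Re-add the listener; the next reconnect/reset attempt succeeds and the log
# reports "Resetting controller successful."
"$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420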
00:20:23.065
00:20:23.065 Latency(us)
00:20:23.065 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:23.065 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:20:23.065 Verification LBA range: start 0x0 length 0x4000
00:20:23.065 NVMe0n1 : 10.01 5453.98 21.30 3683.08 0.00 13980.23 681.43 3019898.88
00:20:23.065 ===================================================================================================================
00:20:23.065 Total : 5453.98 21.30 3683.08 0.00 13980.23 0.00 3019898.88
00:20:23.065 0
00:20:23.065 17:09:12 nvmf_tcp.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 82200
00:20:23.065 17:09:12 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 82200 ']'
00:20:23.065 17:09:12 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 82200
00:20:23.065 17:09:12 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname
00:20:23.065 17:09:12 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:20:23.065 17:09:12 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82200
00:20:23.065 killing process with pid 82200
Received shutdown signal, test time was about 10.000000 seconds
00:20:23.065
00:20:23.065 Latency(us)
00:20:23.065 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:23.065 ===================================================================================================================
00:20:23.065 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:20:23.065 17:09:12 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2
00:20:23.065 17:09:12 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']'
00:20:23.065 17:09:12 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82200'
00:20:23.065 17:09:12 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 82200
00:20:23.065 17:09:12 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 82200
00:20:23.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:20:23.065 17:09:12 nvmf_tcp.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f
00:20:23.065 17:09:12 nvmf_tcp.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=82439
00:20:23.065 17:09:12 nvmf_tcp.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 82439 /var/tmp/bdevperf.sock
00:20:23.065 17:09:12 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 82439 ']'
00:20:23.065 17:09:12 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:20:23.065 17:09:12 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100
00:20:23.065 17:09:12 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:20:23.065 17:09:12 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable
00:20:23.065 17:09:12 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x
00:20:23.065 [2024-07-15 17:09:12.946825] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization...
00:20:23.065 [2024-07-15 17:09:12.946914] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82439 ]
00:20:23.065 [2024-07-15 17:09:13.080810] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:23.065 [2024-07-15 17:09:13.196098] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:20:23.065 [2024-07-15 17:09:13.248860] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring
00:20:23.630 17:09:13 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:20:23.630 17:09:13 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0
00:20:23.630 17:09:13 nvmf_tcp.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=82454
00:20:23.630 17:09:13 nvmf_tcp.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 82439 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt
00:20:23.630 17:09:13 nvmf_tcp.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9
00:20:23.888 17:09:14 nvmf_tcp.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
00:20:24.454 NVMe0n1
00:20:24.454 17:09:14 nvmf_tcp.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=82497
00:20:24.454 17:09:14 nvmf_tcp.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:20:24.454 17:09:14 nvmf_tcp.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1
00:20:24.454 Running I/O for 10 seconds...
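The xtrace lines above show the second bdevperf session being brought up: bdevperf starts idle on core 2, bdev_nvme options are applied, the NVMe-oF/TCP controller is attached with a 5-second controller-loss timeout and a 2-second reconnect delay, and perform_tests launches the queued random-read workload. Condensed into one place, and using only commands that appear verbatim in the trace (the backgrounding, the $spdk/$sock variables, and dropping the waitforlisten step are simplifications made here, not part of the real script), the sequence looks roughly like this:

#!/usr/bin/env bash
# Hedged sketch of the bdevperf setup traced above; not the actual timeout.sh.
spdk=/home/vagrant/spdk_repo/spdk
sock=/var/tmp/bdevperf.sock

# Start bdevperf idle (-z) with the parameters from the trace:
# core mask 0x4, queue depth 128, 4096-byte random reads for 10 seconds.
"$spdk"/build/examples/bdevperf -m 0x4 -z -r "$sock" -q 128 -o 4096 -w randread -t 10 -f &
bdevperf_pid=$!   # the real script records this and waits for the RPC socket to appear

# Apply the bdev_nvme options used by the test (flags copied from the trace).
"$spdk"/scripts/rpc.py -s "$sock" bdev_nvme_set_options -r -1 -e 9

# Attach the target: retry the connection every 2 s and declare the controller
# lost after it has been unreachable for 5 s.
"$spdk"/scripts/rpc.py -s "$sock" bdev_nvme_attach_controller -b NVMe0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2

# Kick off the I/O on the NVMe0n1 bdev reported by the attach call.
"$spdk"/examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests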
00:20:25.384 17:09:15 nvmf_tcp.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:25.643 [2024-07-15 17:09:15.715520] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:25.643 [2024-07-15 17:09:15.715609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.643 [2024-07-15 17:09:15.715624] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:25.643 [2024-07-15 17:09:15.715634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.643 [2024-07-15 17:09:15.715645] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:25.643 [2024-07-15 17:09:15.715653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.643 [2024-07-15 17:09:15.715664] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:25.643 [2024-07-15 17:09:15.715673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.643 [2024-07-15 17:09:15.715682] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1668c00 is same with the state(5) to be set 00:20:25.643 [2024-07-15 17:09:15.715927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:7392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.643 [2024-07-15 17:09:15.715946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.643 [2024-07-15 17:09:15.715966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:74552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.643 [2024-07-15 17:09:15.715977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.643 [2024-07-15 17:09:15.715988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:115264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.643 [2024-07-15 17:09:15.715998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.643 [2024-07-15 17:09:15.716010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:59960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.643 [2024-07-15 17:09:15.716019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.643 [2024-07-15 17:09:15.716031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:129368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.643 [2024-07-15 17:09:15.716040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.643 [2024-07-15 
17:09:15.716051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:69408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.643 [2024-07-15 17:09:15.716061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.643 [2024-07-15 17:09:15.716072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:10056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.643 [2024-07-15 17:09:15.716081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.643 [2024-07-15 17:09:15.716092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:15040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.643 [2024-07-15 17:09:15.716101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.643 [2024-07-15 17:09:15.716112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:121504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.643 [2024-07-15 17:09:15.716121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.643 [2024-07-15 17:09:15.716132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:78504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.643 [2024-07-15 17:09:15.716141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.643 [2024-07-15 17:09:15.716152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:120864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.643 [2024-07-15 17:09:15.716161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.643 [2024-07-15 17:09:15.716172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:115160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.643 [2024-07-15 17:09:15.716181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.643 [2024-07-15 17:09:15.716192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:98056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.643 [2024-07-15 17:09:15.716204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.643 [2024-07-15 17:09:15.716216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:30992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.643 [2024-07-15 17:09:15.716225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.643 [2024-07-15 17:09:15.716236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:99800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.643 [2024-07-15 17:09:15.716245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.643 [2024-07-15 17:09:15.716256] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:100224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.643 [2024-07-15 17:09:15.716265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.643 [2024-07-15 17:09:15.716276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:44672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.643 [2024-07-15 17:09:15.716285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.643 [2024-07-15 17:09:15.716296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:60184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.643 [2024-07-15 17:09:15.716305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.643 [2024-07-15 17:09:15.716315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:119592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.643 [2024-07-15 17:09:15.716324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.643 [2024-07-15 17:09:15.716335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:6464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.643 [2024-07-15 17:09:15.716344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.643 [2024-07-15 17:09:15.716369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:92784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.643 [2024-07-15 17:09:15.716380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.643 [2024-07-15 17:09:15.716393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:73488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.643 [2024-07-15 17:09:15.716402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.643 [2024-07-15 17:09:15.716413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:115752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.643 [2024-07-15 17:09:15.716422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.643 [2024-07-15 17:09:15.716433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:88600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.643 [2024-07-15 17:09:15.716442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.643 [2024-07-15 17:09:15.716453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:81320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.643 [2024-07-15 17:09:15.716463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.643 [2024-07-15 17:09:15.716474] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:101 nsid:1 lba:51368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.643 [2024-07-15 17:09:15.716483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.643 [2024-07-15 17:09:15.716494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:3712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.644 [2024-07-15 17:09:15.716503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.644 [2024-07-15 17:09:15.716514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:12936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.644 [2024-07-15 17:09:15.716523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.644 [2024-07-15 17:09:15.716534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:11424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.644 [2024-07-15 17:09:15.716544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.644 [2024-07-15 17:09:15.716555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:76088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.644 [2024-07-15 17:09:15.716564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.644 [2024-07-15 17:09:15.716575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:100688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.644 [2024-07-15 17:09:15.716584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.644 [2024-07-15 17:09:15.716596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:24728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.644 [2024-07-15 17:09:15.716606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.644 [2024-07-15 17:09:15.716617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:1344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.644 [2024-07-15 17:09:15.716627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.644 [2024-07-15 17:09:15.716638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:89968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.644 [2024-07-15 17:09:15.716652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.644 [2024-07-15 17:09:15.716663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:64832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.644 [2024-07-15 17:09:15.716672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.644 [2024-07-15 17:09:15.716684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 
lba:34768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.644 [2024-07-15 17:09:15.716693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.644 [2024-07-15 17:09:15.716704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:36560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.644 [2024-07-15 17:09:15.716713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.644 [2024-07-15 17:09:15.716724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:71592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.644 [2024-07-15 17:09:15.716733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.644 [2024-07-15 17:09:15.716744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:95352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.644 [2024-07-15 17:09:15.716753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.644 [2024-07-15 17:09:15.716764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:65840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.644 [2024-07-15 17:09:15.716773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.644 [2024-07-15 17:09:15.716784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:88656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.644 [2024-07-15 17:09:15.716793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.644 [2024-07-15 17:09:15.716804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:80784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.644 [2024-07-15 17:09:15.716813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.644 [2024-07-15 17:09:15.716824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:101008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.644 [2024-07-15 17:09:15.716833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.644 [2024-07-15 17:09:15.716844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:2872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.644 [2024-07-15 17:09:15.716853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.644 [2024-07-15 17:09:15.716864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:37400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.644 [2024-07-15 17:09:15.716874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.644 [2024-07-15 17:09:15.716885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:98944 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:25.644 [2024-07-15 17:09:15.716894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.644 [2024-07-15 17:09:15.716905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:122048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.644 [2024-07-15 17:09:15.716914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.644 [2024-07-15 17:09:15.716926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:113144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.644 [2024-07-15 17:09:15.716935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.644 [2024-07-15 17:09:15.716946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:2576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.644 [2024-07-15 17:09:15.716957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.644 [2024-07-15 17:09:15.716968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:65832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.644 [2024-07-15 17:09:15.716977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.644 [2024-07-15 17:09:15.716989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:104848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.644 [2024-07-15 17:09:15.716998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.644 [2024-07-15 17:09:15.717009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:42728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.644 [2024-07-15 17:09:15.717018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.644 [2024-07-15 17:09:15.717029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:77712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.644 [2024-07-15 17:09:15.717038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.644 [2024-07-15 17:09:15.717049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:72160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.644 [2024-07-15 17:09:15.717058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.644 [2024-07-15 17:09:15.717069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:86336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.644 [2024-07-15 17:09:15.717078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.644 [2024-07-15 17:09:15.717088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:74000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.644 [2024-07-15 
17:09:15.717097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.644 [2024-07-15 17:09:15.717108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:6536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.644 [2024-07-15 17:09:15.717117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.644 [2024-07-15 17:09:15.717128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:52312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.644 [2024-07-15 17:09:15.717137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.644 [2024-07-15 17:09:15.717149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:1384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.644 [2024-07-15 17:09:15.717158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.644 [2024-07-15 17:09:15.717169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:89328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.644 [2024-07-15 17:09:15.717177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.644 [2024-07-15 17:09:15.717188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:47200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.644 [2024-07-15 17:09:15.717197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.644 [2024-07-15 17:09:15.717208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:124392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.644 [2024-07-15 17:09:15.717217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.644 [2024-07-15 17:09:15.717228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:31008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.644 [2024-07-15 17:09:15.717237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.644 [2024-07-15 17:09:15.717248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:99480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.644 [2024-07-15 17:09:15.717257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.644 [2024-07-15 17:09:15.717267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:42480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.644 [2024-07-15 17:09:15.717278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.644 [2024-07-15 17:09:15.717290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:110920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.644 [2024-07-15 17:09:15.717300] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.644 [2024-07-15 17:09:15.717311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:82208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.644 [2024-07-15 17:09:15.717320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.644 [2024-07-15 17:09:15.717331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:18304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.644 [2024-07-15 17:09:15.717340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.644 [2024-07-15 17:09:15.717351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:120224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.644 [2024-07-15 17:09:15.717371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.645 [2024-07-15 17:09:15.717382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:59072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.645 [2024-07-15 17:09:15.717391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.645 [2024-07-15 17:09:15.717402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:103296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.645 [2024-07-15 17:09:15.717411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.645 [2024-07-15 17:09:15.717422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:109240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.645 [2024-07-15 17:09:15.717431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.645 [2024-07-15 17:09:15.717442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:115808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.645 [2024-07-15 17:09:15.717451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.645 [2024-07-15 17:09:15.717462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:89104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.645 [2024-07-15 17:09:15.717471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.645 [2024-07-15 17:09:15.717482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:129000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.645 [2024-07-15 17:09:15.717491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.645 [2024-07-15 17:09:15.717502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:6320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.645 [2024-07-15 17:09:15.717511] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.645 [2024-07-15 17:09:15.717522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:51328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.645 [2024-07-15 17:09:15.717531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.645 [2024-07-15 17:09:15.717542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:21624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.645 [2024-07-15 17:09:15.717551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.645 [2024-07-15 17:09:15.717562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:71552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.645 [2024-07-15 17:09:15.717572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.645 [2024-07-15 17:09:15.717583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:119176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.645 [2024-07-15 17:09:15.717592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.645 [2024-07-15 17:09:15.717603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:8680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.645 [2024-07-15 17:09:15.717613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.645 [2024-07-15 17:09:15.717624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:69832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.645 [2024-07-15 17:09:15.717633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.645 [2024-07-15 17:09:15.717645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:11488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.645 [2024-07-15 17:09:15.717654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.645 [2024-07-15 17:09:15.717665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:80552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.645 [2024-07-15 17:09:15.717675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.645 [2024-07-15 17:09:15.717686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:2576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.645 [2024-07-15 17:09:15.717695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.645 [2024-07-15 17:09:15.717705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:57160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.645 [2024-07-15 17:09:15.717714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.645 [2024-07-15 17:09:15.717726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:111672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.645 [2024-07-15 17:09:15.717735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.645 [2024-07-15 17:09:15.717746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:44840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.645 [2024-07-15 17:09:15.717755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.645 [2024-07-15 17:09:15.717766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.645 [2024-07-15 17:09:15.717776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.645 [2024-07-15 17:09:15.717787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:60320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.645 [2024-07-15 17:09:15.717796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.645 [2024-07-15 17:09:15.717807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:56104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.645 [2024-07-15 17:09:15.717816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.645 [2024-07-15 17:09:15.717827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:28536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.645 [2024-07-15 17:09:15.717836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.645 [2024-07-15 17:09:15.717846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:19312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.645 [2024-07-15 17:09:15.717855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.645 [2024-07-15 17:09:15.717866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:53048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.645 [2024-07-15 17:09:15.717875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.645 [2024-07-15 17:09:15.717886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:127176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.645 [2024-07-15 17:09:15.717902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.645 [2024-07-15 17:09:15.717914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:45488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.645 [2024-07-15 17:09:15.717923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:25.645 [2024-07-15 17:09:15.717934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:18176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.645 [2024-07-15 17:09:15.717944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.645 [2024-07-15 17:09:15.717955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:98480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.645 [2024-07-15 17:09:15.717964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.645 [2024-07-15 17:09:15.717975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:13784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.645 [2024-07-15 17:09:15.717985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.645 [2024-07-15 17:09:15.717996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:110928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.645 [2024-07-15 17:09:15.718005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.645 [2024-07-15 17:09:15.718016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:128464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.645 [2024-07-15 17:09:15.718026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.645 [2024-07-15 17:09:15.718037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:42096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.645 [2024-07-15 17:09:15.718045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.645 [2024-07-15 17:09:15.718056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:35936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.645 [2024-07-15 17:09:15.718065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.645 [2024-07-15 17:09:15.718076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:8024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.645 [2024-07-15 17:09:15.718085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.645 [2024-07-15 17:09:15.718095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:65728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.645 [2024-07-15 17:09:15.718104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.645 [2024-07-15 17:09:15.718115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:89120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.645 [2024-07-15 17:09:15.718124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.645 [2024-07-15 17:09:15.718134] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:67600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.645 [2024-07-15 17:09:15.718143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.645 [2024-07-15 17:09:15.718154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:4680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.645 [2024-07-15 17:09:15.718163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.645 [2024-07-15 17:09:15.718174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:92728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.645 [2024-07-15 17:09:15.718183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.645 [2024-07-15 17:09:15.718194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:79632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.645 [2024-07-15 17:09:15.718203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.645 [2024-07-15 17:09:15.718214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:81424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.645 [2024-07-15 17:09:15.718228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.646 [2024-07-15 17:09:15.718239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.646 [2024-07-15 17:09:15.718248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.646 [2024-07-15 17:09:15.718259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:73256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.646 [2024-07-15 17:09:15.718268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.646 [2024-07-15 17:09:15.718284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:66056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.646 [2024-07-15 17:09:15.718294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.646 [2024-07-15 17:09:15.718304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:103680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.646 [2024-07-15 17:09:15.718313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.646 [2024-07-15 17:09:15.718325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:113792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.646 [2024-07-15 17:09:15.718334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.646 [2024-07-15 17:09:15.718345] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:98784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.646 [2024-07-15 17:09:15.718361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.646 [2024-07-15 17:09:15.718374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:38720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.646 [2024-07-15 17:09:15.718383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.646 [2024-07-15 17:09:15.718395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:79104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.646 [2024-07-15 17:09:15.718403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.646 [2024-07-15 17:09:15.718414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.646 [2024-07-15 17:09:15.718423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.646 [2024-07-15 17:09:15.718434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:50248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.646 [2024-07-15 17:09:15.718443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.646 [2024-07-15 17:09:15.718453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:61064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.646 [2024-07-15 17:09:15.718462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.646 [2024-07-15 17:09:15.718473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:38840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.646 [2024-07-15 17:09:15.718482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.646 [2024-07-15 17:09:15.718493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:86560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.646 [2024-07-15 17:09:15.718502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.646 [2024-07-15 17:09:15.718513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:36312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.646 [2024-07-15 17:09:15.718522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.646 [2024-07-15 17:09:15.718533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:93472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.646 [2024-07-15 17:09:15.718542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.646 [2024-07-15 17:09:15.718553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:0 nsid:1 lba:55376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.646 [2024-07-15 17:09:15.718568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.646 [2024-07-15 17:09:15.718578] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7310 is same with the state(5) to be set 00:20:25.646 [2024-07-15 17:09:15.718589] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:25.646 [2024-07-15 17:09:15.718597] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:25.646 [2024-07-15 17:09:15.718605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:102368 len:8 PRP1 0x0 PRP2 0x0 00:20:25.646 [2024-07-15 17:09:15.718619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.646 [2024-07-15 17:09:15.718671] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x16d7310 was disconnected and freed. reset controller. 00:20:25.646 [2024-07-15 17:09:15.718928] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:25.646 [2024-07-15 17:09:15.718959] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1668c00 (9): Bad file descriptor 00:20:25.646 [2024-07-15 17:09:15.719058] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:25.646 [2024-07-15 17:09:15.719081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1668c00 with addr=10.0.0.2, port=4420 00:20:25.646 [2024-07-15 17:09:15.719092] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1668c00 is same with the state(5) to be set 00:20:25.646 [2024-07-15 17:09:15.719110] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1668c00 (9): Bad file descriptor 00:20:25.646 [2024-07-15 17:09:15.719126] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:25.646 [2024-07-15 17:09:15.719136] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:25.646 [2024-07-15 17:09:15.719146] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:25.646 [2024-07-15 17:09:15.719165] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
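errno = 111 in the uring_sock_create failures above is ECONNREFUSED: at this point the target side is not accepting connections on 10.0.0.2:4420, so every reconnect attempt is turned away immediately and bdev_nvme schedules another reset. A quick, SPDK-independent way to confirm that state from the host is sketched below; it is a hypothetical helper, not part of timeout.sh.

# Probe the target's NVMe/TCP port; "refused" corresponds to the errno 111 seen above.
if timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
    echo "10.0.0.2:4420 is accepting connections"
else
    echo "10.0.0.2:4420 refused or unreachable"
fi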
00:20:25.646 [2024-07-15 17:09:15.719176] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:25.646 17:09:15 nvmf_tcp.nvmf_timeout -- host/timeout.sh@128 -- # wait 82497 00:20:27.543 [2024-07-15 17:09:17.719553] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:27.543 [2024-07-15 17:09:17.719815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1668c00 with addr=10.0.0.2, port=4420 00:20:27.543 [2024-07-15 17:09:17.720037] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1668c00 is same with the state(5) to be set 00:20:27.543 [2024-07-15 17:09:17.720273] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1668c00 (9): Bad file descriptor 00:20:27.543 [2024-07-15 17:09:17.720538] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:27.543 [2024-07-15 17:09:17.720695] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:27.543 [2024-07-15 17:09:17.720836] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:27.543 [2024-07-15 17:09:17.720965] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:27.543 [2024-07-15 17:09:17.721103] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:29.441 [2024-07-15 17:09:19.721320] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:29.441 [2024-07-15 17:09:19.721408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1668c00 with addr=10.0.0.2, port=4420 00:20:29.441 [2024-07-15 17:09:19.721427] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1668c00 is same with the state(5) to be set 00:20:29.441 [2024-07-15 17:09:19.721455] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1668c00 (9): Bad file descriptor 00:20:29.441 [2024-07-15 17:09:19.721475] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:29.441 [2024-07-15 17:09:19.721485] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:29.441 [2024-07-15 17:09:19.721497] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:29.441 [2024-07-15 17:09:19.721525] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:29.441 [2024-07-15 17:09:19.721537] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:32.030 [2024-07-15 17:09:21.721623] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
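The reconnect attempts above land roughly two seconds apart (17:09:15, :17, :19, :21), which is what host/timeout.sh verifies next: it counts 'reconnect delay bdev controller NVMe0' events in the bdevperf trace dump and fails if there are two or fewer. A standalone sketch of that check follows, with the trace path and marker strings taken from the output just below; the gap calculation is an extra and assumes the trace timestamps are in milliseconds.

trace=/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt

# Expect at least 3 delayed reconnects for an ~8 s run with a 2 s reconnect delay.
delays=$(grep -c 'reconnect delay bdev controller NVMe0' "$trace")
(( delays >= 3 )) || { echo "only $delays reconnect delays recorded"; exit 1; }

# Print the spacing between consecutive reconnect attempts (~2000 ms apart here).
grep 'reconnect bdev controller NVMe0' "$trace" |
    awk '{ t = $1 + 0; if (prev) printf "gap: %.1f ms\n", t - prev; prev = t }'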
00:20:32.030 [2024-07-15 17:09:21.721690] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:32.030 [2024-07-15 17:09:21.721703] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:32.030 [2024-07-15 17:09:21.721714] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:20:32.030 [2024-07-15 17:09:21.721742] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:32.597 00:20:32.597 Latency(us) 00:20:32.597 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:32.597 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:20:32.597 NVMe0n1 : 8.15 2133.80 8.34 15.70 0.00 59498.48 8043.05 7015926.69 00:20:32.597 =================================================================================================================== 00:20:32.597 Total : 2133.80 8.34 15.70 0.00 59498.48 8043.05 7015926.69 00:20:32.597 0 00:20:32.597 17:09:22 nvmf_tcp.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:32.597 Attaching 5 probes... 00:20:32.597 1284.024179: reset bdev controller NVMe0 00:20:32.597 1284.096797: reconnect bdev controller NVMe0 00:20:32.597 3284.481258: reconnect delay bdev controller NVMe0 00:20:32.597 3284.529654: reconnect bdev controller NVMe0 00:20:32.597 5286.265103: reconnect delay bdev controller NVMe0 00:20:32.597 5286.293403: reconnect bdev controller NVMe0 00:20:32.597 7286.686183: reconnect delay bdev controller NVMe0 00:20:32.597 7286.709020: reconnect bdev controller NVMe0 00:20:32.597 17:09:22 nvmf_tcp.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:20:32.597 17:09:22 nvmf_tcp.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:20:32.597 17:09:22 nvmf_tcp.nvmf_timeout -- host/timeout.sh@136 -- # kill 82454 00:20:32.597 17:09:22 nvmf_tcp.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:32.597 17:09:22 nvmf_tcp.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 82439 00:20:32.597 17:09:22 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 82439 ']' 00:20:32.597 17:09:22 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 82439 00:20:32.597 17:09:22 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:20:32.597 17:09:22 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:32.597 17:09:22 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82439 00:20:32.597 killing process with pid 82439 00:20:32.597 Received shutdown signal, test time was about 8.212409 seconds 00:20:32.597 00:20:32.597 Latency(us) 00:20:32.597 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:32.597 =================================================================================================================== 00:20:32.597 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:32.597 17:09:22 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:32.597 17:09:22 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:32.597 17:09:22 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82439' 00:20:32.597 17:09:22 nvmf_tcp.nvmf_timeout -- 
common/autotest_common.sh@967 -- # kill 82439 00:20:32.597 17:09:22 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 82439 00:20:32.854 17:09:22 nvmf_tcp.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:33.112 17:09:23 nvmf_tcp.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:20:33.112 17:09:23 nvmf_tcp.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:20:33.112 17:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:33.112 17:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@117 -- # sync 00:20:33.112 17:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:33.112 17:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@120 -- # set +e 00:20:33.112 17:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:33.112 17:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:33.112 rmmod nvme_tcp 00:20:33.112 rmmod nvme_fabrics 00:20:33.112 rmmod nvme_keyring 00:20:33.112 17:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:33.370 17:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@124 -- # set -e 00:20:33.370 17:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@125 -- # return 0 00:20:33.370 17:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@489 -- # '[' -n 82001 ']' 00:20:33.370 17:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@490 -- # killprocess 82001 00:20:33.370 17:09:23 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 82001 ']' 00:20:33.370 17:09:23 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 82001 00:20:33.370 17:09:23 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:20:33.370 17:09:23 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:33.370 17:09:23 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82001 00:20:33.370 killing process with pid 82001 00:20:33.370 17:09:23 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:33.370 17:09:23 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:33.370 17:09:23 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82001' 00:20:33.370 17:09:23 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 82001 00:20:33.370 17:09:23 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 82001 00:20:33.627 17:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:33.627 17:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:33.627 17:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:33.627 17:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:33.627 17:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:33.627 17:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:33.627 17:09:23 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:33.627 17:09:23 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:33.627 17:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:33.627 00:20:33.627 real 0m47.112s 00:20:33.627 user 2m18.414s 
00:20:33.627 sys 0m5.587s 00:20:33.627 17:09:23 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:33.627 17:09:23 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:33.627 ************************************ 00:20:33.627 END TEST nvmf_timeout 00:20:33.627 ************************************ 00:20:33.627 17:09:23 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:33.627 17:09:23 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ virt == phy ]] 00:20:33.627 17:09:23 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:20:33.627 17:09:23 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:33.627 17:09:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:33.627 17:09:23 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:20:33.627 00:20:33.627 real 12m14.782s 00:20:33.627 user 29m55.030s 00:20:33.627 sys 3m0.189s 00:20:33.627 ************************************ 00:20:33.627 END TEST nvmf_tcp 00:20:33.627 ************************************ 00:20:33.627 17:09:23 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:33.627 17:09:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:33.627 17:09:23 -- common/autotest_common.sh@1142 -- # return 0 00:20:33.627 17:09:23 -- spdk/autotest.sh@288 -- # [[ 1 -eq 0 ]] 00:20:33.627 17:09:23 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:20:33.627 17:09:23 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:33.627 17:09:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:33.627 17:09:23 -- common/autotest_common.sh@10 -- # set +x 00:20:33.627 ************************************ 00:20:33.627 START TEST nvmf_dif 00:20:33.627 ************************************ 00:20:33.627 17:09:23 nvmf_dif -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:20:33.627 * Looking for test storage... 
00:20:33.627 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:33.627 17:09:23 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:33.627 17:09:23 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:20:33.627 17:09:23 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:33.627 17:09:23 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:33.627 17:09:23 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:33.627 17:09:23 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:33.627 17:09:23 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:33.627 17:09:23 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:33.627 17:09:23 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:33.627 17:09:23 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:33.627 17:09:23 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:33.627 17:09:23 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:33.627 17:09:23 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da 00:20:33.887 17:09:23 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=0b4e8503-7bac-4879-926a-209303c4b3da 00:20:33.887 17:09:23 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:33.887 17:09:23 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:33.887 17:09:23 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:33.887 17:09:23 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:33.887 17:09:23 nvmf_dif -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:33.887 17:09:23 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:33.887 17:09:23 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:33.887 17:09:23 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:33.887 17:09:23 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:33.887 17:09:23 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:33.887 17:09:23 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:33.887 17:09:23 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:20:33.887 17:09:23 nvmf_dif -- paths/export.sh@6 
-- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:33.887 17:09:23 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:20:33.887 17:09:23 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:33.887 17:09:23 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:33.887 17:09:23 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:33.887 17:09:23 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:33.887 17:09:23 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:33.887 17:09:23 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:33.887 17:09:23 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:33.887 17:09:23 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:33.887 17:09:23 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:20:33.887 17:09:23 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:20:33.887 17:09:23 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:20:33.887 17:09:23 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:20:33.887 17:09:23 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:20:33.887 17:09:23 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:33.887 17:09:23 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:33.887 17:09:23 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:33.887 17:09:23 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:33.887 17:09:23 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:33.887 17:09:23 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:33.887 17:09:23 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:20:33.887 17:09:23 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:33.887 17:09:23 nvmf_dif -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:20:33.887 17:09:23 nvmf_dif -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:20:33.887 17:09:23 nvmf_dif -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:20:33.887 17:09:23 nvmf_dif -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:20:33.887 17:09:23 nvmf_dif -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:20:33.887 17:09:23 nvmf_dif -- nvmf/common.sh@432 -- # nvmf_veth_init 00:20:33.887 17:09:23 nvmf_dif -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:33.887 17:09:23 nvmf_dif -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:33.887 17:09:23 nvmf_dif -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:33.887 17:09:23 nvmf_dif -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:33.887 17:09:23 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:33.887 17:09:23 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:33.887 17:09:23 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:33.887 17:09:23 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:33.887 17:09:23 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:33.887 17:09:23 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:33.887 17:09:23 
nvmf_dif -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:33.887 17:09:23 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:33.887 17:09:23 nvmf_dif -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:33.887 17:09:23 nvmf_dif -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:33.887 Cannot find device "nvmf_tgt_br" 00:20:33.887 17:09:23 nvmf_dif -- nvmf/common.sh@155 -- # true 00:20:33.887 17:09:23 nvmf_dif -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:33.887 Cannot find device "nvmf_tgt_br2" 00:20:33.887 17:09:23 nvmf_dif -- nvmf/common.sh@156 -- # true 00:20:33.887 17:09:23 nvmf_dif -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:33.887 17:09:23 nvmf_dif -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:33.887 Cannot find device "nvmf_tgt_br" 00:20:33.887 17:09:23 nvmf_dif -- nvmf/common.sh@158 -- # true 00:20:33.887 17:09:24 nvmf_dif -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:33.887 Cannot find device "nvmf_tgt_br2" 00:20:33.887 17:09:24 nvmf_dif -- nvmf/common.sh@159 -- # true 00:20:33.887 17:09:24 nvmf_dif -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:33.887 17:09:24 nvmf_dif -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:33.887 17:09:24 nvmf_dif -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:33.887 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:33.887 17:09:24 nvmf_dif -- nvmf/common.sh@162 -- # true 00:20:33.887 17:09:24 nvmf_dif -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:33.887 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:33.887 17:09:24 nvmf_dif -- nvmf/common.sh@163 -- # true 00:20:33.887 17:09:24 nvmf_dif -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:33.887 17:09:24 nvmf_dif -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:33.887 17:09:24 nvmf_dif -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:33.887 17:09:24 nvmf_dif -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:33.887 17:09:24 nvmf_dif -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:33.887 17:09:24 nvmf_dif -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:33.887 17:09:24 nvmf_dif -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:33.887 17:09:24 nvmf_dif -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:33.887 17:09:24 nvmf_dif -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:33.887 17:09:24 nvmf_dif -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:33.887 17:09:24 nvmf_dif -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:33.887 17:09:24 nvmf_dif -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:33.887 17:09:24 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:33.887 17:09:24 nvmf_dif -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:33.887 17:09:24 nvmf_dif -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:33.887 17:09:24 nvmf_dif -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:34.146 
17:09:24 nvmf_dif -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:34.146 17:09:24 nvmf_dif -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:34.146 17:09:24 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:34.146 17:09:24 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:34.146 17:09:24 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:34.146 17:09:24 nvmf_dif -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:34.146 17:09:24 nvmf_dif -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:34.146 17:09:24 nvmf_dif -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:34.146 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:34.146 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.098 ms 00:20:34.146 00:20:34.146 --- 10.0.0.2 ping statistics --- 00:20:34.146 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:34.146 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:20:34.146 17:09:24 nvmf_dif -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:34.146 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:34.146 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:20:34.146 00:20:34.146 --- 10.0.0.3 ping statistics --- 00:20:34.146 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:34.146 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:20:34.146 17:09:24 nvmf_dif -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:34.146 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:34.147 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:20:34.147 00:20:34.147 --- 10.0.0.1 ping statistics --- 00:20:34.147 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:34.147 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:20:34.147 17:09:24 nvmf_dif -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:34.147 17:09:24 nvmf_dif -- nvmf/common.sh@433 -- # return 0 00:20:34.147 17:09:24 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:20:34.147 17:09:24 nvmf_dif -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:34.405 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:34.405 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:34.405 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:34.405 17:09:24 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:34.405 17:09:24 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:34.405 17:09:24 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:34.405 17:09:24 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:34.405 17:09:24 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:34.405 17:09:24 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:34.405 17:09:24 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:20:34.405 17:09:24 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:20:34.405 17:09:24 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:34.405 17:09:24 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:34.405 17:09:24 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:34.405 17:09:24 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=82929 00:20:34.405 
17:09:24 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 82929 00:20:34.405 17:09:24 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 82929 ']' 00:20:34.405 17:09:24 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:34.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:34.405 17:09:24 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:34.405 17:09:24 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:34.405 17:09:24 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:34.405 17:09:24 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:34.406 17:09:24 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:34.664 [2024-07-15 17:09:24.719818] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:20:34.664 [2024-07-15 17:09:24.719917] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:34.664 [2024-07-15 17:09:24.862575] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:34.922 [2024-07-15 17:09:24.986476] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:34.922 [2024-07-15 17:09:24.986534] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:34.922 [2024-07-15 17:09:24.986549] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:34.922 [2024-07-15 17:09:24.986560] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:34.922 [2024-07-15 17:09:24.986569] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
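Stripped of the xtrace noise, the target bring-up performed here and in the rpc_cmd traces just below reduces to a short sequence: start nvmf_tgt inside the test namespace, wait for its RPC socket, then create a TCP transport with DIF insert/strip, a DIF type 1 null bdev, and a subsystem that exposes it on 10.0.0.2:4420. A condensed sketch with the paths and arguments copied from the trace; the polling loop is a simplification of waitforlisten.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Start the target inside the test namespace and wait for /var/tmp/spdk.sock to answer.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF &
until "$rpc" spdk_get_version >/dev/null 2>&1; do sleep 0.5; done

# TCP transport with DIF insert/strip, a null bdev with 512-byte blocks, 16-byte
# metadata and DIF type 1, and a subsystem open to any host on 10.0.0.2:4420.
"$rpc" nvmf_create_transport -t tcp -o --dif-insert-or-strip
"$rpc" bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420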
00:20:34.922 [2024-07-15 17:09:24.986599] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:34.922 [2024-07-15 17:09:25.042442] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:20:35.490 17:09:25 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:35.490 17:09:25 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:20:35.490 17:09:25 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:35.490 17:09:25 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:35.490 17:09:25 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:35.490 17:09:25 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:35.490 17:09:25 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:20:35.490 17:09:25 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:20:35.490 17:09:25 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.490 17:09:25 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:35.490 [2024-07-15 17:09:25.709244] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:35.490 17:09:25 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.490 17:09:25 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:20:35.490 17:09:25 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:35.490 17:09:25 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:35.490 17:09:25 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:35.490 ************************************ 00:20:35.490 START TEST fio_dif_1_default 00:20:35.490 ************************************ 00:20:35.490 17:09:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:20:35.490 17:09:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:20:35.490 17:09:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:20:35.490 17:09:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:20:35.490 17:09:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:20:35.490 17:09:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:20:35.490 17:09:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:20:35.490 17:09:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.490 17:09:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:35.490 bdev_null0 00:20:35.490 17:09:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.490 17:09:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:35.490 17:09:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.490 17:09:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:35.490 17:09:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.490 17:09:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:35.490 17:09:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.490 17:09:25 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:35.490 17:09:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.490 17:09:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:35.490 17:09:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.490 17:09:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:35.490 [2024-07-15 17:09:25.757341] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:35.490 17:09:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.490 17:09:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:20:35.490 17:09:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:20:35.490 17:09:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:20:35.490 17:09:25 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:20:35.490 17:09:25 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:20:35.490 17:09:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:35.490 17:09:25 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:35.490 17:09:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:35.490 17:09:25 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:35.490 { 00:20:35.490 "params": { 00:20:35.490 "name": "Nvme$subsystem", 00:20:35.490 "trtype": "$TEST_TRANSPORT", 00:20:35.490 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:35.490 "adrfam": "ipv4", 00:20:35.491 "trsvcid": "$NVMF_PORT", 00:20:35.491 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:35.491 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:35.491 "hdgst": ${hdgst:-false}, 00:20:35.491 "ddgst": ${ddgst:-false} 00:20:35.491 }, 00:20:35.491 "method": "bdev_nvme_attach_controller" 00:20:35.491 } 00:20:35.491 EOF 00:20:35.491 )") 00:20:35.491 17:09:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:35.491 17:09:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:20:35.491 17:09:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:35.491 17:09:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:20:35.491 17:09:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:35.491 17:09:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:35.491 17:09:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:20:35.491 17:09:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:20:35.491 17:09:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:35.491 17:09:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:35.491 17:09:25 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:20:35.491 17:09:25 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:35.491 17:09:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:20:35.491 17:09:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:20:35.491 17:09:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:35.491 17:09:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:20:35.491 17:09:25 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:20:35.491 17:09:25 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:20:35.491 17:09:25 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:35.491 "params": { 00:20:35.491 "name": "Nvme0", 00:20:35.491 "trtype": "tcp", 00:20:35.491 "traddr": "10.0.0.2", 00:20:35.491 "adrfam": "ipv4", 00:20:35.491 "trsvcid": "4420", 00:20:35.491 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:35.491 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:35.491 "hdgst": false, 00:20:35.491 "ddgst": false 00:20:35.491 }, 00:20:35.491 "method": "bdev_nvme_attach_controller" 00:20:35.491 }' 00:20:35.750 17:09:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:35.750 17:09:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:35.750 17:09:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:35.750 17:09:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:20:35.750 17:09:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:35.750 17:09:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:35.750 17:09:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:35.750 17:09:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:35.750 17:09:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:35.750 17:09:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:35.750 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:20:35.750 fio-3.35 00:20:35.750 Starting 1 thread 00:20:47.968 00:20:47.968 filename0: (groupid=0, jobs=1): err= 0: pid=82996: Mon Jul 15 17:09:36 2024 00:20:47.968 read: IOPS=8689, BW=33.9MiB/s (35.6MB/s)(339MiB/10001msec) 00:20:47.968 slat (usec): min=6, max=292, avg= 8.78, stdev= 3.95 00:20:47.968 clat (usec): min=363, max=5716, avg=434.43, stdev=46.44 00:20:47.968 lat (usec): min=370, max=5752, avg=443.21, stdev=46.95 00:20:47.968 clat percentiles (usec): 00:20:47.968 | 1.00th=[ 388], 5.00th=[ 404], 10.00th=[ 408], 20.00th=[ 416], 00:20:47.968 | 30.00th=[ 420], 40.00th=[ 429], 50.00th=[ 433], 60.00th=[ 437], 00:20:47.968 | 70.00th=[ 441], 80.00th=[ 449], 90.00th=[ 461], 95.00th=[ 469], 00:20:47.968 | 99.00th=[ 545], 99.50th=[ 586], 99.90th=[ 693], 99.95th=[ 857], 00:20:47.968 | 99.99th=[ 1139] 00:20:47.968 bw ( KiB/s): min=33440, max=35232, per=100.00%, avg=34777.26, stdev=388.36, samples=19 00:20:47.968 iops : min= 8360, max= 8808, avg=8694.32, stdev=97.09, samples=19 00:20:47.968 lat (usec) : 500=98.39%, 750=1.53%, 1000=0.05% 00:20:47.968 lat 
(msec) : 2=0.02%, 10=0.01% 00:20:47.968 cpu : usr=84.52%, sys=13.35%, ctx=123, majf=0, minf=0 00:20:47.968 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:47.968 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:47.968 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:47.968 issued rwts: total=86908,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:47.968 latency : target=0, window=0, percentile=100.00%, depth=4 00:20:47.968 00:20:47.968 Run status group 0 (all jobs): 00:20:47.968 READ: bw=33.9MiB/s (35.6MB/s), 33.9MiB/s-33.9MiB/s (35.6MB/s-35.6MB/s), io=339MiB (356MB), run=10001-10001msec 00:20:47.968 17:09:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:20:47.968 17:09:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:20:47.968 17:09:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:20:47.968 17:09:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:47.968 17:09:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:20:47.968 17:09:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:47.968 17:09:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.968 17:09:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:47.968 17:09:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.968 17:09:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:47.968 17:09:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.968 17:09:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:47.968 ************************************ 00:20:47.968 END TEST fio_dif_1_default 00:20:47.968 ************************************ 00:20:47.968 17:09:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.968 00:20:47.968 real 0m10.975s 00:20:47.968 user 0m9.057s 00:20:47.968 sys 0m1.601s 00:20:47.968 17:09:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:47.968 17:09:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:47.968 17:09:36 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:20:47.968 17:09:36 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:20:47.968 17:09:36 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:47.968 17:09:36 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:47.968 17:09:36 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:47.968 ************************************ 00:20:47.968 START TEST fio_dif_1_multi_subsystems 00:20:47.968 ************************************ 00:20:47.968 17:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:20:47.968 17:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:20:47.968 17:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:20:47.968 17:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:20:47.968 17:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:20:47.968 17:09:36 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:20:47.968 17:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:20:47.968 17:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:20:47.968 17:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.968 17:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:47.968 bdev_null0 00:20:47.968 17:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.968 17:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:47.968 17:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.968 17:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:47.968 17:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.968 17:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:47.968 17:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.968 17:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:47.968 17:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.968 17:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:47.968 17:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.968 17:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:47.968 [2024-07-15 17:09:36.784194] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:47.968 17:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.968 17:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:20:47.968 17:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:20:47.968 17:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:20:47.968 17:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:20:47.968 17:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.968 17:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:47.968 bdev_null1 00:20:47.968 17:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.968 17:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:20:47.968 17:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.968 17:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:47.968 17:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.968 17:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:20:47.968 17:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.968 17:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:47.968 17:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.968 17:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:47.968 17:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.968 17:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:47.968 17:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.968 17:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:20:47.968 17:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:20:47.968 17:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:20:47.968 17:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:20:47.968 17:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:20:47.968 17:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:47.968 17:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:47.968 17:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:20:47.968 17:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:47.968 { 00:20:47.968 "params": { 00:20:47.968 "name": "Nvme$subsystem", 00:20:47.968 "trtype": "$TEST_TRANSPORT", 00:20:47.968 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:47.968 "adrfam": "ipv4", 00:20:47.968 "trsvcid": "$NVMF_PORT", 00:20:47.968 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:47.968 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:47.968 "hdgst": ${hdgst:-false}, 00:20:47.968 "ddgst": ${ddgst:-false} 00:20:47.968 }, 00:20:47.968 "method": "bdev_nvme_attach_controller" 00:20:47.968 } 00:20:47.969 EOF 00:20:47.969 )") 00:20:47.969 17:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:20:47.969 17:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:20:47.969 17:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:47.969 17:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:47.969 17:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:20:47.969 17:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:47.969 17:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:47.969 17:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local 
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:47.969 17:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:20:47.969 17:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:20:47.969 17:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:20:47.969 17:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:20:47.969 17:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:47.969 17:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:47.969 17:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:47.969 { 00:20:47.969 "params": { 00:20:47.969 "name": "Nvme$subsystem", 00:20:47.969 "trtype": "$TEST_TRANSPORT", 00:20:47.969 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:47.969 "adrfam": "ipv4", 00:20:47.969 "trsvcid": "$NVMF_PORT", 00:20:47.969 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:47.969 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:47.969 "hdgst": ${hdgst:-false}, 00:20:47.969 "ddgst": ${ddgst:-false} 00:20:47.969 }, 00:20:47.969 "method": "bdev_nvme_attach_controller" 00:20:47.969 } 00:20:47.969 EOF 00:20:47.969 )") 00:20:47.969 17:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:20:47.969 17:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:47.969 17:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:20:47.969 17:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:20:47.969 17:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:47.969 17:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:20:47.969 17:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:47.969 17:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
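The fio job file that gen_fio_conf writes to /dev/fd/61 is not echoed in this trace; only the bdev JSON (printed next) and the fio banner appear. A job file consistent with that banner (randread, 4096-byte blocks, iodepth 4, one job per subsystem, ~10 s of runtime) would look roughly like the sketch below. This is a reconstruction, not the helper's actual output, and the filename values assume the Nvme0/Nvme1 controllers expose namespace bdevs named Nvme0n1/Nvme1n1, which this log does not show.

    # sketch only -- reconstructed from the fio banner, not the real gen_fio_conf output
    [global]
    # ioengine served by the LD_PRELOAD'ed build/fio/spdk_bdev plugin
    ioengine=spdk_bdev
    # assumed: the SPDK fio plugin requires threaded jobs
    thread=1
    rw=randread
    bs=4096
    iodepth=4
    time_based=1
    # assumed from the ~10001 msec run times reported below
    runtime=10

    [filename0]
    # assumed bdev name for controller Nvme0
    filename=Nvme0n1

    [filename1]
    # assumed bdev name for controller Nvme1
    filename=Nvme1n1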
00:20:47.969 17:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:20:47.969 17:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:47.969 "params": { 00:20:47.969 "name": "Nvme0", 00:20:47.969 "trtype": "tcp", 00:20:47.969 "traddr": "10.0.0.2", 00:20:47.969 "adrfam": "ipv4", 00:20:47.969 "trsvcid": "4420", 00:20:47.969 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:47.969 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:47.969 "hdgst": false, 00:20:47.969 "ddgst": false 00:20:47.969 }, 00:20:47.969 "method": "bdev_nvme_attach_controller" 00:20:47.969 },{ 00:20:47.969 "params": { 00:20:47.969 "name": "Nvme1", 00:20:47.969 "trtype": "tcp", 00:20:47.969 "traddr": "10.0.0.2", 00:20:47.969 "adrfam": "ipv4", 00:20:47.969 "trsvcid": "4420", 00:20:47.969 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:47.969 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:47.969 "hdgst": false, 00:20:47.969 "ddgst": false 00:20:47.969 }, 00:20:47.969 "method": "bdev_nvme_attach_controller" 00:20:47.969 }' 00:20:47.969 17:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:47.969 17:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:47.969 17:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:47.969 17:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:47.969 17:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:20:47.969 17:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:47.969 17:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:47.969 17:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:47.969 17:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:47.969 17:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:47.969 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:20:47.969 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:20:47.969 fio-3.35 00:20:47.969 Starting 2 threads 00:20:57.963 00:20:57.963 filename0: (groupid=0, jobs=1): err= 0: pid=83154: Mon Jul 15 17:09:47 2024 00:20:57.963 read: IOPS=4852, BW=19.0MiB/s (19.9MB/s)(190MiB/10001msec) 00:20:57.963 slat (nsec): min=7148, max=52852, avg=13406.59, stdev=3403.83 00:20:57.963 clat (usec): min=661, max=2788, avg=787.00, stdev=33.43 00:20:57.963 lat (usec): min=679, max=2834, avg=800.41, stdev=33.78 00:20:57.963 clat percentiles (usec): 00:20:57.963 | 1.00th=[ 734], 5.00th=[ 750], 10.00th=[ 758], 20.00th=[ 766], 00:20:57.963 | 30.00th=[ 775], 40.00th=[ 783], 50.00th=[ 783], 60.00th=[ 791], 00:20:57.963 | 70.00th=[ 799], 80.00th=[ 807], 90.00th=[ 816], 95.00th=[ 832], 00:20:57.963 | 99.00th=[ 857], 99.50th=[ 865], 99.90th=[ 922], 99.95th=[ 1057], 00:20:57.963 | 99.99th=[ 1401] 00:20:57.963 bw ( KiB/s): min=19136, max=19680, per=50.07%, avg=19437.47, stdev=145.21, samples=19 00:20:57.963 iops : min= 4784, max= 
4920, avg=4859.37, stdev=36.30, samples=19 00:20:57.963 lat (usec) : 750=6.76%, 1000=93.18% 00:20:57.963 lat (msec) : 2=0.05%, 4=0.01% 00:20:57.963 cpu : usr=90.30%, sys=8.37%, ctx=20, majf=0, minf=0 00:20:57.963 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:57.963 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:57.963 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:57.963 issued rwts: total=48532,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:57.963 latency : target=0, window=0, percentile=100.00%, depth=4 00:20:57.963 filename1: (groupid=0, jobs=1): err= 0: pid=83155: Mon Jul 15 17:09:47 2024 00:20:57.963 read: IOPS=4852, BW=19.0MiB/s (19.9MB/s)(190MiB/10001msec) 00:20:57.963 slat (nsec): min=7200, max=53151, avg=13197.74, stdev=3316.26 00:20:57.963 clat (usec): min=634, max=2773, avg=788.25, stdev=43.62 00:20:57.963 lat (usec): min=650, max=2818, avg=801.44, stdev=44.67 00:20:57.963 clat percentiles (usec): 00:20:57.963 | 1.00th=[ 693], 5.00th=[ 717], 10.00th=[ 734], 20.00th=[ 758], 00:20:57.963 | 30.00th=[ 775], 40.00th=[ 783], 50.00th=[ 791], 60.00th=[ 799], 00:20:57.963 | 70.00th=[ 807], 80.00th=[ 816], 90.00th=[ 832], 95.00th=[ 848], 00:20:57.963 | 99.00th=[ 873], 99.50th=[ 881], 99.90th=[ 947], 99.95th=[ 1037], 00:20:57.963 | 99.99th=[ 1418] 00:20:57.963 bw ( KiB/s): min=19136, max=19680, per=50.07%, avg=19437.47, stdev=145.21, samples=19 00:20:57.963 iops : min= 4784, max= 4920, avg=4859.37, stdev=36.30, samples=19 00:20:57.963 lat (usec) : 750=15.30%, 1000=84.64% 00:20:57.963 lat (msec) : 2=0.05%, 4=0.01% 00:20:57.963 cpu : usr=90.30%, sys=8.31%, ctx=19, majf=0, minf=9 00:20:57.963 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:57.963 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:57.963 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:57.963 issued rwts: total=48532,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:57.963 latency : target=0, window=0, percentile=100.00%, depth=4 00:20:57.963 00:20:57.963 Run status group 0 (all jobs): 00:20:57.963 READ: bw=37.9MiB/s (39.8MB/s), 19.0MiB/s-19.0MiB/s (19.9MB/s-19.9MB/s), io=379MiB (398MB), run=10001-10001msec 00:20:57.963 17:09:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:20:57.963 17:09:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:20:57.963 17:09:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:20:57.963 17:09:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:57.963 17:09:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:20:57.963 17:09:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:57.963 17:09:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.963 17:09:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:57.963 17:09:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.963 17:09:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:57.964 17:09:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.964 17:09:47 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:20:57.964 17:09:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.964 17:09:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:20:57.964 17:09:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:20:57.964 17:09:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:20:57.964 17:09:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:57.964 17:09:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.964 17:09:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:57.964 17:09:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.964 17:09:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:20:57.964 17:09:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.964 17:09:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:57.964 ************************************ 00:20:57.964 END TEST fio_dif_1_multi_subsystems 00:20:57.964 ************************************ 00:20:57.964 17:09:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.964 00:20:57.964 real 0m11.138s 00:20:57.964 user 0m18.832s 00:20:57.964 sys 0m1.946s 00:20:57.964 17:09:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:57.964 17:09:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:57.964 17:09:47 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:20:57.964 17:09:47 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:20:57.964 17:09:47 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:57.964 17:09:47 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:57.964 17:09:47 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:57.964 ************************************ 00:20:57.964 START TEST fio_dif_rand_params 00:20:57.964 ************************************ 00:20:57.964 17:09:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:20:57.964 17:09:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:20:57.964 17:09:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:20:57.964 17:09:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:20:57.964 17:09:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:20:57.964 17:09:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:20:57.964 17:09:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:20:57.964 17:09:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:20:57.964 17:09:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:20:57.964 17:09:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:20:57.964 17:09:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:57.964 17:09:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:20:57.964 17:09:47 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:20:57.964 17:09:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:20:57.964 17:09:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.964 17:09:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:57.964 bdev_null0 00:20:57.964 17:09:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.964 17:09:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:57.964 17:09:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.964 17:09:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:57.964 17:09:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.964 17:09:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:57.964 17:09:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.964 17:09:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:57.964 17:09:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.964 17:09:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:57.964 17:09:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.964 17:09:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:57.964 [2024-07-15 17:09:47.975851] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:57.964 17:09:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.964 17:09:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:20:57.964 17:09:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:20:57.964 17:09:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:20:57.964 17:09:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:20:57.964 17:09:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:57.964 17:09:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:57.964 17:09:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:20:57.964 17:09:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:57.964 17:09:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:20:57.964 17:09:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:57.964 { 00:20:57.964 "params": { 00:20:57.964 "name": "Nvme$subsystem", 00:20:57.964 "trtype": "$TEST_TRANSPORT", 00:20:57.964 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:57.964 "adrfam": "ipv4", 00:20:57.964 "trsvcid": "$NVMF_PORT", 00:20:57.964 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:57.964 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:20:57.964 "hdgst": ${hdgst:-false}, 00:20:57.964 "ddgst": ${ddgst:-false} 00:20:57.964 }, 00:20:57.964 "method": "bdev_nvme_attach_controller" 00:20:57.964 } 00:20:57.964 EOF 00:20:57.964 )") 00:20:57.964 17:09:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:20:57.964 17:09:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:20:57.964 17:09:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:57.964 17:09:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:57.964 17:09:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:20:57.964 17:09:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:57.964 17:09:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:57.964 17:09:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:20:57.964 17:09:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:20:57.964 17:09:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:57.964 17:09:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:57.964 17:09:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:57.964 17:09:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:20:57.964 17:09:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:20:57.964 17:09:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:57.964 17:09:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:20:57.964 17:09:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:57.964 "params": { 00:20:57.964 "name": "Nvme0", 00:20:57.964 "trtype": "tcp", 00:20:57.964 "traddr": "10.0.0.2", 00:20:57.964 "adrfam": "ipv4", 00:20:57.964 "trsvcid": "4420", 00:20:57.964 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:57.964 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:57.964 "hdgst": false, 00:20:57.964 "ddgst": false 00:20:57.964 }, 00:20:57.964 "method": "bdev_nvme_attach_controller" 00:20:57.964 }' 00:20:57.964 17:09:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:57.964 17:09:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:57.964 17:09:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:57.964 17:09:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:57.964 17:09:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:57.964 17:09:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:20:57.964 17:09:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:57.964 17:09:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:57.964 17:09:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:57.964 17:09:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:57.964 17:09:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:57.964 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:20:57.964 ... 00:20:57.964 fio-3.35 00:20:57.964 Starting 3 threads 00:21:04.528 00:21:04.528 filename0: (groupid=0, jobs=1): err= 0: pid=83311: Mon Jul 15 17:09:53 2024 00:21:04.528 read: IOPS=258, BW=32.3MiB/s (33.9MB/s)(162MiB/5008msec) 00:21:04.528 slat (nsec): min=7894, max=70003, avg=17570.92, stdev=5974.31 00:21:04.528 clat (usec): min=11339, max=13038, avg=11551.03, stdev=125.32 00:21:04.528 lat (usec): min=11353, max=13064, avg=11568.60, stdev=126.28 00:21:04.528 clat percentiles (usec): 00:21:04.528 | 1.00th=[11469], 5.00th=[11469], 10.00th=[11469], 20.00th=[11469], 00:21:04.528 | 30.00th=[11469], 40.00th=[11469], 50.00th=[11469], 60.00th=[11600], 00:21:04.528 | 70.00th=[11600], 80.00th=[11600], 90.00th=[11731], 95.00th=[11731], 00:21:04.528 | 99.00th=[11994], 99.50th=[12125], 99.90th=[13042], 99.95th=[13042], 00:21:04.528 | 99.99th=[13042] 00:21:04.528 bw ( KiB/s): min=32256, max=33792, per=33.34%, avg=33107.40, stdev=435.16, samples=10 00:21:04.528 iops : min= 252, max= 264, avg=258.60, stdev= 3.41, samples=10 00:21:04.528 lat (msec) : 20=100.00% 00:21:04.528 cpu : usr=90.87%, sys=8.53%, ctx=9, majf=0, minf=9 00:21:04.528 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:04.528 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:04.528 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:04.528 issued rwts: total=1296,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:04.528 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:04.528 filename0: (groupid=0, jobs=1): err= 0: pid=83312: Mon Jul 15 17:09:53 2024 00:21:04.528 read: IOPS=258, BW=32.3MiB/s (33.9MB/s)(162MiB/5012msec) 00:21:04.528 slat (nsec): min=5520, max=43475, avg=16164.02, stdev=6067.24 00:21:04.528 clat (usec): min=11358, max=16439, avg=11562.81, stdev=257.36 00:21:04.528 lat (usec): min=11377, max=16472, avg=11578.97, stdev=258.12 00:21:04.528 clat percentiles (usec): 00:21:04.528 | 1.00th=[11469], 5.00th=[11469], 10.00th=[11469], 20.00th=[11469], 00:21:04.528 | 30.00th=[11469], 40.00th=[11469], 50.00th=[11469], 60.00th=[11600], 00:21:04.528 | 70.00th=[11600], 80.00th=[11600], 90.00th=[11731], 95.00th=[11731], 00:21:04.528 | 99.00th=[11994], 99.50th=[12125], 99.90th=[16450], 99.95th=[16450], 00:21:04.528 | 99.99th=[16450] 00:21:04.528 bw ( KiB/s): min=32191, max=33792, per=33.33%, avg=33094.30, stdev=450.20, samples=10 00:21:04.528 iops : min= 251, max= 264, avg=258.50, stdev= 3.63, samples=10 00:21:04.528 lat (msec) : 20=100.00% 00:21:04.528 cpu : usr=91.62%, sys=7.84%, ctx=6, majf=0, minf=9 00:21:04.528 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:04.528 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:04.528 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:04.528 issued rwts: total=1296,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:04.528 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:04.528 filename0: (groupid=0, jobs=1): err= 0: pid=83313: Mon Jul 15 17:09:53 2024 00:21:04.528 read: IOPS=258, BW=32.3MiB/s (33.9MB/s)(162MiB/5009msec) 00:21:04.528 slat (nsec): min=7724, 
max=51232, avg=17311.83, stdev=5559.90 00:21:04.528 clat (usec): min=11339, max=14006, avg=11555.00, stdev=156.67 00:21:04.528 lat (usec): min=11352, max=14032, avg=11572.31, stdev=157.58 00:21:04.528 clat percentiles (usec): 00:21:04.528 | 1.00th=[11469], 5.00th=[11469], 10.00th=[11469], 20.00th=[11469], 00:21:04.529 | 30.00th=[11469], 40.00th=[11469], 50.00th=[11469], 60.00th=[11600], 00:21:04.529 | 70.00th=[11600], 80.00th=[11600], 90.00th=[11731], 95.00th=[11731], 00:21:04.529 | 99.00th=[11994], 99.50th=[12125], 99.90th=[13960], 99.95th=[13960], 00:21:04.529 | 99.99th=[13960] 00:21:04.529 bw ( KiB/s): min=32256, max=33792, per=33.34%, avg=33100.80, stdev=435.95, samples=10 00:21:04.529 iops : min= 252, max= 264, avg=258.60, stdev= 3.41, samples=10 00:21:04.529 lat (msec) : 20=100.00% 00:21:04.529 cpu : usr=91.01%, sys=8.45%, ctx=4, majf=0, minf=9 00:21:04.529 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:04.529 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:04.529 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:04.529 issued rwts: total=1296,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:04.529 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:04.529 00:21:04.529 Run status group 0 (all jobs): 00:21:04.529 READ: bw=97.0MiB/s (102MB/s), 32.3MiB/s-32.3MiB/s (33.9MB/s-33.9MB/s), io=486MiB (510MB), run=5008-5012msec 00:21:04.529 17:09:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:21:04.529 17:09:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:21:04.529 17:09:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:21:04.529 17:09:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:21:04.529 17:09:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:21:04.529 17:09:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:04.529 17:09:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.529 17:09:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:04.529 17:09:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.529 17:09:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:21:04.529 17:09:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.529 17:09:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:04.529 17:09:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.529 17:09:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:21:04.529 17:09:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:21:04.529 17:09:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:21:04.529 17:09:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:21:04.529 17:09:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:21:04.529 17:09:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:21:04.529 17:09:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:21:04.529 17:09:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:21:04.529 17:09:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 
00:21:04.529 17:09:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:21:04.529 17:09:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:21:04.529 17:09:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:21:04.529 17:09:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.529 17:09:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:04.529 bdev_null0 00:21:04.529 17:09:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.529 17:09:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:21:04.529 17:09:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.529 17:09:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:04.529 17:09:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.529 17:09:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:21:04.529 17:09:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.529 17:09:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:04.529 17:09:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.529 17:09:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:04.529 17:09:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.529 17:09:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:04.529 [2024-07-15 17:09:54.001901] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:04.529 17:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.529 17:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:21:04.529 17:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:21:04.529 17:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:21:04.529 17:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:21:04.529 17:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.529 17:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:04.529 bdev_null1 00:21:04.529 17:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.529 17:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:21:04.529 17:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.529 17:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:04.529 17:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.529 17:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 
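The create_subsystem helper traced here reduces to four RPCs per subsystem: create a DIF-enabled null bdev, create the NVMe-oF subsystem, attach the bdev as a namespace, and add a TCP listener. Assuming rpc_cmd wraps scripts/rpc.py against the same nvmf_tgt (with the TCP transport already created earlier in the run), subsystem 0 of this DIF type 2 pass could be reproduced standalone roughly as:

    # minimal sketch of the per-subsystem setup shown in the trace above;
    # arguments are copied from the rpc_cmd calls in this log
    scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
        --serial-number 53313233-0 --allow-any-host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420

The same sequence repeats for cnode1/bdev_null1 and cnode2/bdev_null2, which yields the three targets the 24 fio threads read from further down.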
00:21:04.529 17:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.529 17:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:04.529 17:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.529 17:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:04.529 17:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.529 17:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:04.529 17:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.529 17:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:21:04.529 17:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:21:04.529 17:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:21:04.529 17:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:21:04.529 17:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.529 17:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:04.529 bdev_null2 00:21:04.529 17:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.529 17:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:21:04.529 17:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.529 17:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:04.529 17:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.529 17:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:21:04.529 17:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.529 17:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:04.529 17:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.529 17:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:21:04.529 17:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.529 17:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:04.529 17:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.529 17:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:21:04.529 17:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:21:04.529 17:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:21:04.529 17:09:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:21:04.529 17:09:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:21:04.529 17:09:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:04.529 17:09:54 nvmf_dif.fio_dif_rand_params -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:04.529 { 00:21:04.529 "params": { 00:21:04.529 "name": "Nvme$subsystem", 00:21:04.529 "trtype": "$TEST_TRANSPORT", 00:21:04.529 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:04.529 "adrfam": "ipv4", 00:21:04.529 "trsvcid": "$NVMF_PORT", 00:21:04.529 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:04.529 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:04.529 "hdgst": ${hdgst:-false}, 00:21:04.529 "ddgst": ${ddgst:-false} 00:21:04.529 }, 00:21:04.529 "method": "bdev_nvme_attach_controller" 00:21:04.529 } 00:21:04.529 EOF 00:21:04.529 )") 00:21:04.529 17:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:21:04.529 17:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:04.529 17:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:21:04.529 17:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:21:04.529 17:09:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:21:04.529 17:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:04.529 17:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:21:04.529 17:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:04.529 17:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:21:04.529 17:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:04.529 17:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:21:04.529 17:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:21:04.529 17:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:21:04.529 17:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:21:04.529 17:09:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:04.529 17:09:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:04.529 { 00:21:04.529 "params": { 00:21:04.529 "name": "Nvme$subsystem", 00:21:04.530 "trtype": "$TEST_TRANSPORT", 00:21:04.530 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:04.530 "adrfam": "ipv4", 00:21:04.530 "trsvcid": "$NVMF_PORT", 00:21:04.530 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:04.530 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:04.530 "hdgst": ${hdgst:-false}, 00:21:04.530 "ddgst": ${ddgst:-false} 00:21:04.530 }, 00:21:04.530 "method": "bdev_nvme_attach_controller" 00:21:04.530 } 00:21:04.530 EOF 00:21:04.530 )") 00:21:04.530 17:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:21:04.530 17:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:04.530 17:09:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:21:04.530 17:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:21:04.530 17:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:21:04.530 17:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:21:04.530 17:09:54 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:04.530 17:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:21:04.530 17:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:04.530 17:09:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:04.530 17:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:21:04.530 17:09:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:04.530 { 00:21:04.530 "params": { 00:21:04.530 "name": "Nvme$subsystem", 00:21:04.530 "trtype": "$TEST_TRANSPORT", 00:21:04.530 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:04.530 "adrfam": "ipv4", 00:21:04.530 "trsvcid": "$NVMF_PORT", 00:21:04.530 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:04.530 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:04.530 "hdgst": ${hdgst:-false}, 00:21:04.530 "ddgst": ${ddgst:-false} 00:21:04.530 }, 00:21:04.530 "method": "bdev_nvme_attach_controller" 00:21:04.530 } 00:21:04.530 EOF 00:21:04.530 )") 00:21:04.530 17:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:21:04.530 17:09:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:21:04.530 17:09:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:21:04.530 17:09:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:21:04.530 17:09:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:04.530 "params": { 00:21:04.530 "name": "Nvme0", 00:21:04.530 "trtype": "tcp", 00:21:04.530 "traddr": "10.0.0.2", 00:21:04.530 "adrfam": "ipv4", 00:21:04.530 "trsvcid": "4420", 00:21:04.530 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:04.530 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:04.530 "hdgst": false, 00:21:04.530 "ddgst": false 00:21:04.530 }, 00:21:04.530 "method": "bdev_nvme_attach_controller" 00:21:04.530 },{ 00:21:04.530 "params": { 00:21:04.530 "name": "Nvme1", 00:21:04.530 "trtype": "tcp", 00:21:04.530 "traddr": "10.0.0.2", 00:21:04.530 "adrfam": "ipv4", 00:21:04.530 "trsvcid": "4420", 00:21:04.530 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:04.530 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:04.530 "hdgst": false, 00:21:04.530 "ddgst": false 00:21:04.530 }, 00:21:04.530 "method": "bdev_nvme_attach_controller" 00:21:04.530 },{ 00:21:04.530 "params": { 00:21:04.530 "name": "Nvme2", 00:21:04.530 "trtype": "tcp", 00:21:04.530 "traddr": "10.0.0.2", 00:21:04.530 "adrfam": "ipv4", 00:21:04.530 "trsvcid": "4420", 00:21:04.530 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:04.530 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:04.530 "hdgst": false, 00:21:04.530 "ddgst": false 00:21:04.530 }, 00:21:04.530 "method": "bdev_nvme_attach_controller" 00:21:04.530 }' 00:21:04.530 17:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:04.530 17:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:04.530 17:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:04.530 17:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:04.530 17:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:21:04.530 17:09:54 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:04.530 17:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:04.530 17:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:04.530 17:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:04.530 17:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:04.530 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:21:04.530 ... 00:21:04.530 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:21:04.530 ... 00:21:04.530 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:21:04.530 ... 00:21:04.530 fio-3.35 00:21:04.530 Starting 24 threads 00:21:16.734 00:21:16.734 filename0: (groupid=0, jobs=1): err= 0: pid=83408: Mon Jul 15 17:10:05 2024 00:21:16.734 read: IOPS=216, BW=865KiB/s (886kB/s)(8680KiB/10035msec) 00:21:16.734 slat (usec): min=5, max=4030, avg=20.10, stdev=149.26 00:21:16.734 clat (msec): min=7, max=147, avg=73.78, stdev=22.29 00:21:16.734 lat (msec): min=7, max=147, avg=73.80, stdev=22.29 00:21:16.734 clat percentiles (msec): 00:21:16.734 | 1.00th=[ 8], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 56], 00:21:16.734 | 30.00th=[ 64], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 77], 00:21:16.734 | 70.00th=[ 83], 80.00th=[ 95], 90.00th=[ 105], 95.00th=[ 111], 00:21:16.734 | 99.00th=[ 122], 99.50th=[ 128], 99.90th=[ 140], 99.95th=[ 146], 00:21:16.734 | 99.99th=[ 148] 00:21:16.734 bw ( KiB/s): min= 632, max= 1394, per=4.17%, avg=864.40, stdev=169.32, samples=20 00:21:16.734 iops : min= 158, max= 348, avg=216.05, stdev=42.26, samples=20 00:21:16.734 lat (msec) : 10=2.03%, 20=0.18%, 50=10.78%, 100=72.40%, 250=14.61% 00:21:16.734 cpu : usr=41.88%, sys=2.59%, ctx=1614, majf=0, minf=9 00:21:16.734 IO depths : 1=0.1%, 2=1.7%, 4=6.8%, 8=76.1%, 16=15.3%, 32=0.0%, >=64=0.0% 00:21:16.734 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.734 complete : 0=0.0%, 4=89.1%, 8=9.4%, 16=1.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.734 issued rwts: total=2170,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:16.734 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:16.734 filename0: (groupid=0, jobs=1): err= 0: pid=83409: Mon Jul 15 17:10:05 2024 00:21:16.734 read: IOPS=218, BW=875KiB/s (896kB/s)(8792KiB/10044msec) 00:21:16.734 slat (usec): min=5, max=5025, avg=18.36, stdev=136.96 00:21:16.734 clat (msec): min=6, max=140, avg=72.92, stdev=22.01 00:21:16.734 lat (msec): min=6, max=140, avg=72.93, stdev=22.01 00:21:16.734 clat percentiles (msec): 00:21:16.734 | 1.00th=[ 10], 5.00th=[ 43], 10.00th=[ 48], 20.00th=[ 55], 00:21:16.734 | 30.00th=[ 64], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 75], 00:21:16.734 | 70.00th=[ 82], 80.00th=[ 93], 90.00th=[ 105], 95.00th=[ 112], 00:21:16.734 | 99.00th=[ 120], 99.50th=[ 122], 99.90th=[ 140], 99.95th=[ 140], 00:21:16.734 | 99.99th=[ 142] 00:21:16.734 bw ( KiB/s): min= 640, max= 1280, per=4.22%, avg=874.85, stdev=139.31, samples=20 00:21:16.734 iops : min= 160, max= 320, avg=218.70, stdev=34.84, samples=20 00:21:16.734 lat (msec) : 10=1.36%, 20=1.46%, 50=12.33%, 100=72.47%, 250=12.37% 00:21:16.734 cpu : usr=39.76%, sys=2.42%, 
ctx=1139, majf=0, minf=9 00:21:16.734 IO depths : 1=0.1%, 2=0.7%, 4=2.5%, 8=80.4%, 16=16.3%, 32=0.0%, >=64=0.0% 00:21:16.734 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.734 complete : 0=0.0%, 4=88.2%, 8=11.3%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.734 issued rwts: total=2198,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:16.734 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:16.734 filename0: (groupid=0, jobs=1): err= 0: pid=83410: Mon Jul 15 17:10:05 2024 00:21:16.734 read: IOPS=223, BW=895KiB/s (916kB/s)(8972KiB/10029msec) 00:21:16.734 slat (usec): min=7, max=5036, avg=21.69, stdev=161.43 00:21:16.734 clat (msec): min=29, max=148, avg=71.41, stdev=20.03 00:21:16.734 lat (msec): min=29, max=148, avg=71.43, stdev=20.03 00:21:16.734 clat percentiles (msec): 00:21:16.734 | 1.00th=[ 36], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 52], 00:21:16.734 | 30.00th=[ 61], 40.00th=[ 67], 50.00th=[ 71], 60.00th=[ 73], 00:21:16.734 | 70.00th=[ 79], 80.00th=[ 88], 90.00th=[ 102], 95.00th=[ 111], 00:21:16.735 | 99.00th=[ 121], 99.50th=[ 122], 99.90th=[ 128], 99.95th=[ 146], 00:21:16.735 | 99.99th=[ 148] 00:21:16.735 bw ( KiB/s): min= 632, max= 1024, per=4.29%, avg=890.05, stdev=111.26, samples=20 00:21:16.735 iops : min= 158, max= 256, avg=222.45, stdev=27.81, samples=20 00:21:16.735 lat (msec) : 50=16.36%, 100=72.89%, 250=10.74% 00:21:16.735 cpu : usr=41.37%, sys=2.69%, ctx=1468, majf=0, minf=9 00:21:16.735 IO depths : 1=0.1%, 2=0.3%, 4=1.0%, 8=82.7%, 16=16.0%, 32=0.0%, >=64=0.0% 00:21:16.735 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.735 complete : 0=0.0%, 4=87.2%, 8=12.5%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.735 issued rwts: total=2243,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:16.735 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:16.735 filename0: (groupid=0, jobs=1): err= 0: pid=83411: Mon Jul 15 17:10:05 2024 00:21:16.735 read: IOPS=215, BW=860KiB/s (881kB/s)(8628KiB/10027msec) 00:21:16.735 slat (usec): min=7, max=8028, avg=21.76, stdev=243.99 00:21:16.735 clat (msec): min=25, max=143, avg=74.22, stdev=20.24 00:21:16.735 lat (msec): min=25, max=143, avg=74.25, stdev=20.24 00:21:16.735 clat percentiles (msec): 00:21:16.735 | 1.00th=[ 40], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[ 58], 00:21:16.735 | 30.00th=[ 62], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 73], 00:21:16.735 | 70.00th=[ 84], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 109], 00:21:16.735 | 99.00th=[ 121], 99.50th=[ 132], 99.90th=[ 144], 99.95th=[ 144], 00:21:16.735 | 99.99th=[ 144] 00:21:16.735 bw ( KiB/s): min= 640, max= 1017, per=4.14%, avg=858.45, stdev=119.98, samples=20 00:21:16.735 iops : min= 160, max= 254, avg=214.55, stdev=29.98, samples=20 00:21:16.735 lat (msec) : 50=17.34%, 100=70.56%, 250=12.10% 00:21:16.735 cpu : usr=31.15%, sys=2.05%, ctx=843, majf=0, minf=9 00:21:16.735 IO depths : 1=0.1%, 2=0.8%, 4=3.4%, 8=79.7%, 16=15.9%, 32=0.0%, >=64=0.0% 00:21:16.735 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.735 complete : 0=0.0%, 4=88.2%, 8=11.1%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.735 issued rwts: total=2157,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:16.735 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:16.735 filename0: (groupid=0, jobs=1): err= 0: pid=83412: Mon Jul 15 17:10:05 2024 00:21:16.735 read: IOPS=223, BW=893KiB/s (914kB/s)(8940KiB/10013msec) 00:21:16.735 slat (usec): min=7, max=8022, avg=23.07, stdev=207.58 00:21:16.735 clat (msec): min=15, 
max=151, avg=71.56, stdev=21.59 00:21:16.735 lat (msec): min=15, max=151, avg=71.58, stdev=21.59 00:21:16.735 clat percentiles (msec): 00:21:16.735 | 1.00th=[ 34], 5.00th=[ 43], 10.00th=[ 48], 20.00th=[ 52], 00:21:16.735 | 30.00th=[ 58], 40.00th=[ 66], 50.00th=[ 71], 60.00th=[ 72], 00:21:16.735 | 70.00th=[ 79], 80.00th=[ 87], 90.00th=[ 106], 95.00th=[ 112], 00:21:16.735 | 99.00th=[ 128], 99.50th=[ 134], 99.90th=[ 134], 99.95th=[ 153], 00:21:16.735 | 99.99th=[ 153] 00:21:16.735 bw ( KiB/s): min= 636, max= 1072, per=4.26%, avg=883.63, stdev=133.29, samples=19 00:21:16.735 iops : min= 159, max= 268, avg=220.84, stdev=33.33, samples=19 00:21:16.735 lat (msec) : 20=0.31%, 50=17.40%, 100=69.31%, 250=12.98% 00:21:16.735 cpu : usr=42.40%, sys=2.67%, ctx=1330, majf=0, minf=9 00:21:16.735 IO depths : 1=0.1%, 2=0.7%, 4=2.6%, 8=81.3%, 16=15.4%, 32=0.0%, >=64=0.0% 00:21:16.735 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.735 complete : 0=0.0%, 4=87.4%, 8=12.0%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.735 issued rwts: total=2235,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:16.735 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:16.735 filename0: (groupid=0, jobs=1): err= 0: pid=83413: Mon Jul 15 17:10:05 2024 00:21:16.735 read: IOPS=202, BW=810KiB/s (829kB/s)(8108KiB/10013msec) 00:21:16.735 slat (usec): min=5, max=8024, avg=22.60, stdev=216.15 00:21:16.735 clat (msec): min=35, max=158, avg=78.89, stdev=22.61 00:21:16.735 lat (msec): min=35, max=158, avg=78.91, stdev=22.62 00:21:16.735 clat percentiles (msec): 00:21:16.735 | 1.00th=[ 40], 5.00th=[ 48], 10.00th=[ 49], 20.00th=[ 61], 00:21:16.735 | 30.00th=[ 69], 40.00th=[ 72], 50.00th=[ 73], 60.00th=[ 81], 00:21:16.735 | 70.00th=[ 90], 80.00th=[ 96], 90.00th=[ 110], 95.00th=[ 120], 00:21:16.735 | 99.00th=[ 146], 99.50th=[ 148], 99.90th=[ 157], 99.95th=[ 159], 00:21:16.735 | 99.99th=[ 159] 00:21:16.735 bw ( KiB/s): min= 512, max= 1048, per=3.85%, avg=799.00, stdev=153.19, samples=19 00:21:16.735 iops : min= 128, max= 262, avg=199.68, stdev=38.31, samples=19 00:21:16.735 lat (msec) : 50=12.23%, 100=70.84%, 250=16.92% 00:21:16.735 cpu : usr=38.94%, sys=2.26%, ctx=1199, majf=0, minf=9 00:21:16.735 IO depths : 1=0.1%, 2=3.3%, 4=13.0%, 8=69.6%, 16=14.2%, 32=0.0%, >=64=0.0% 00:21:16.735 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.735 complete : 0=0.0%, 4=90.7%, 8=6.4%, 16=2.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.735 issued rwts: total=2027,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:16.735 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:16.735 filename0: (groupid=0, jobs=1): err= 0: pid=83414: Mon Jul 15 17:10:05 2024 00:21:16.735 read: IOPS=210, BW=842KiB/s (862kB/s)(8428KiB/10015msec) 00:21:16.735 slat (usec): min=4, max=8020, avg=19.23, stdev=174.50 00:21:16.735 clat (msec): min=17, max=167, avg=75.94, stdev=20.27 00:21:16.735 lat (msec): min=17, max=167, avg=75.96, stdev=20.27 00:21:16.735 clat percentiles (msec): 00:21:16.735 | 1.00th=[ 42], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[ 59], 00:21:16.735 | 30.00th=[ 69], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 78], 00:21:16.735 | 70.00th=[ 84], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 109], 00:21:16.735 | 99.00th=[ 126], 99.50th=[ 136], 99.90th=[ 136], 99.95th=[ 167], 00:21:16.735 | 99.99th=[ 167] 00:21:16.735 bw ( KiB/s): min= 656, max= 1024, per=4.04%, avg=838.20, stdev=113.50, samples=20 00:21:16.735 iops : min= 164, max= 256, avg=209.45, stdev=28.44, samples=20 00:21:16.735 lat (msec) : 20=0.28%, 
50=13.62%, 100=72.09%, 250=14.00% 00:21:16.735 cpu : usr=30.95%, sys=2.03%, ctx=907, majf=0, minf=9 00:21:16.735 IO depths : 1=0.1%, 2=1.8%, 4=7.0%, 8=76.1%, 16=15.1%, 32=0.0%, >=64=0.0% 00:21:16.735 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.735 complete : 0=0.0%, 4=89.0%, 8=9.4%, 16=1.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.735 issued rwts: total=2107,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:16.735 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:16.735 filename0: (groupid=0, jobs=1): err= 0: pid=83415: Mon Jul 15 17:10:05 2024 00:21:16.735 read: IOPS=207, BW=829KiB/s (849kB/s)(8316KiB/10029msec) 00:21:16.735 slat (usec): min=6, max=8065, avg=26.24, stdev=304.82 00:21:16.735 clat (msec): min=36, max=141, avg=77.01, stdev=19.52 00:21:16.735 lat (msec): min=36, max=141, avg=77.04, stdev=19.52 00:21:16.735 clat percentiles (msec): 00:21:16.735 | 1.00th=[ 38], 5.00th=[ 48], 10.00th=[ 50], 20.00th=[ 61], 00:21:16.735 | 30.00th=[ 69], 40.00th=[ 72], 50.00th=[ 73], 60.00th=[ 79], 00:21:16.735 | 70.00th=[ 84], 80.00th=[ 96], 90.00th=[ 107], 95.00th=[ 113], 00:21:16.735 | 99.00th=[ 122], 99.50th=[ 123], 99.90th=[ 126], 99.95th=[ 131], 00:21:16.735 | 99.99th=[ 142] 00:21:16.735 bw ( KiB/s): min= 664, max= 1017, per=3.97%, avg=824.50, stdev=100.20, samples=20 00:21:16.735 iops : min= 166, max= 254, avg=206.05, stdev=25.07, samples=20 00:21:16.735 lat (msec) : 50=10.82%, 100=73.21%, 250=15.97% 00:21:16.735 cpu : usr=31.82%, sys=1.84%, ctx=1087, majf=0, minf=9 00:21:16.735 IO depths : 1=0.1%, 2=2.1%, 4=8.9%, 8=73.8%, 16=15.2%, 32=0.0%, >=64=0.0% 00:21:16.735 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.735 complete : 0=0.0%, 4=89.8%, 8=8.2%, 16=2.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.735 issued rwts: total=2079,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:16.735 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:16.735 filename1: (groupid=0, jobs=1): err= 0: pid=83416: Mon Jul 15 17:10:05 2024 00:21:16.735 read: IOPS=219, BW=878KiB/s (900kB/s)(8820KiB/10040msec) 00:21:16.735 slat (usec): min=3, max=9026, avg=32.46, stdev=297.15 00:21:16.735 clat (msec): min=5, max=152, avg=72.62, stdev=22.98 00:21:16.735 lat (msec): min=5, max=152, avg=72.65, stdev=22.98 00:21:16.735 clat percentiles (msec): 00:21:16.735 | 1.00th=[ 8], 5.00th=[ 43], 10.00th=[ 48], 20.00th=[ 53], 00:21:16.735 | 30.00th=[ 63], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 74], 00:21:16.735 | 70.00th=[ 80], 80.00th=[ 94], 90.00th=[ 107], 95.00th=[ 114], 00:21:16.735 | 99.00th=[ 124], 99.50th=[ 130], 99.90th=[ 146], 99.95th=[ 153], 00:21:16.735 | 99.99th=[ 153] 00:21:16.735 bw ( KiB/s): min= 640, max= 1392, per=4.22%, avg=875.50, stdev=163.90, samples=20 00:21:16.735 iops : min= 160, max= 348, avg=218.85, stdev=41.00, samples=20 00:21:16.735 lat (msec) : 10=2.09%, 20=0.09%, 50=14.51%, 100=68.84%, 250=14.47% 00:21:16.735 cpu : usr=42.98%, sys=2.70%, ctx=1370, majf=0, minf=9 00:21:16.735 IO depths : 1=0.1%, 2=1.1%, 4=4.4%, 8=78.6%, 16=15.8%, 32=0.0%, >=64=0.0% 00:21:16.735 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.735 complete : 0=0.0%, 4=88.6%, 8=10.5%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.735 issued rwts: total=2205,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:16.735 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:16.735 filename1: (groupid=0, jobs=1): err= 0: pid=83417: Mon Jul 15 17:10:05 2024 00:21:16.735 read: IOPS=220, BW=881KiB/s (903kB/s)(8856KiB/10048msec) 00:21:16.735 
slat (usec): min=5, max=4026, avg=18.30, stdev=122.55 00:21:16.735 clat (msec): min=3, max=160, avg=72.43, stdev=22.44 00:21:16.735 lat (msec): min=3, max=160, avg=72.45, stdev=22.44 00:21:16.735 clat percentiles (msec): 00:21:16.735 | 1.00th=[ 9], 5.00th=[ 39], 10.00th=[ 48], 20.00th=[ 55], 00:21:16.735 | 30.00th=[ 64], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 74], 00:21:16.735 | 70.00th=[ 81], 80.00th=[ 91], 90.00th=[ 106], 95.00th=[ 112], 00:21:16.735 | 99.00th=[ 120], 99.50th=[ 122], 99.90th=[ 128], 99.95th=[ 142], 00:21:16.735 | 99.99th=[ 161] 00:21:16.735 bw ( KiB/s): min= 632, max= 1472, per=4.25%, avg=881.25, stdev=172.15, samples=20 00:21:16.735 iops : min= 158, max= 368, avg=220.30, stdev=43.04, samples=20 00:21:16.735 lat (msec) : 4=0.05%, 10=1.26%, 20=0.95%, 50=13.60%, 100=71.23% 00:21:16.735 lat (msec) : 250=12.92% 00:21:16.735 cpu : usr=41.35%, sys=2.17%, ctx=1418, majf=0, minf=9 00:21:16.735 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=82.5%, 16=16.8%, 32=0.0%, >=64=0.0% 00:21:16.735 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.735 complete : 0=0.0%, 4=87.7%, 8=12.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.735 issued rwts: total=2214,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:16.735 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:16.735 filename1: (groupid=0, jobs=1): err= 0: pid=83418: Mon Jul 15 17:10:05 2024 00:21:16.735 read: IOPS=211, BW=845KiB/s (865kB/s)(8464KiB/10021msec) 00:21:16.735 slat (usec): min=4, max=4024, avg=17.50, stdev=87.61 00:21:16.735 clat (msec): min=35, max=125, avg=75.65, stdev=19.53 00:21:16.735 lat (msec): min=35, max=125, avg=75.66, stdev=19.53 00:21:16.735 clat percentiles (msec): 00:21:16.735 | 1.00th=[ 40], 5.00th=[ 48], 10.00th=[ 49], 20.00th=[ 58], 00:21:16.736 | 30.00th=[ 65], 40.00th=[ 72], 50.00th=[ 73], 60.00th=[ 78], 00:21:16.736 | 70.00th=[ 84], 80.00th=[ 96], 90.00th=[ 106], 95.00th=[ 110], 00:21:16.736 | 99.00th=[ 118], 99.50th=[ 121], 99.90th=[ 122], 99.95th=[ 122], 00:21:16.736 | 99.99th=[ 126] 00:21:16.736 bw ( KiB/s): min= 640, max= 1000, per=4.06%, avg=842.00, stdev=110.23, samples=20 00:21:16.736 iops : min= 160, max= 250, avg=210.45, stdev=27.57, samples=20 00:21:16.736 lat (msec) : 50=12.81%, 100=72.64%, 250=14.56% 00:21:16.736 cpu : usr=40.77%, sys=2.39%, ctx=1249, majf=0, minf=9 00:21:16.736 IO depths : 1=0.1%, 2=1.9%, 4=7.7%, 8=75.5%, 16=14.9%, 32=0.0%, >=64=0.0% 00:21:16.736 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.736 complete : 0=0.0%, 4=89.1%, 8=9.2%, 16=1.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.736 issued rwts: total=2116,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:16.736 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:16.736 filename1: (groupid=0, jobs=1): err= 0: pid=83419: Mon Jul 15 17:10:05 2024 00:21:16.736 read: IOPS=207, BW=832KiB/s (852kB/s)(8324KiB/10010msec) 00:21:16.736 slat (usec): min=6, max=10031, avg=40.63, stdev=457.90 00:21:16.736 clat (msec): min=15, max=134, avg=76.78, stdev=19.70 00:21:16.736 lat (msec): min=15, max=134, avg=76.82, stdev=19.70 00:21:16.736 clat percentiles (msec): 00:21:16.736 | 1.00th=[ 38], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[ 61], 00:21:16.736 | 30.00th=[ 70], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 77], 00:21:16.736 | 70.00th=[ 85], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 111], 00:21:16.736 | 99.00th=[ 121], 99.50th=[ 122], 99.90th=[ 133], 99.95th=[ 136], 00:21:16.736 | 99.99th=[ 136] 00:21:16.736 bw ( KiB/s): min= 654, max= 1024, per=3.95%, avg=818.11, stdev=109.30, 
samples=19 00:21:16.736 iops : min= 163, max= 256, avg=204.42, stdev=27.40, samples=19 00:21:16.736 lat (msec) : 20=0.29%, 50=12.59%, 100=73.91%, 250=13.21% 00:21:16.736 cpu : usr=35.64%, sys=2.29%, ctx=1033, majf=0, minf=9 00:21:16.736 IO depths : 1=0.1%, 2=2.4%, 4=9.6%, 8=73.4%, 16=14.6%, 32=0.0%, >=64=0.0% 00:21:16.736 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.736 complete : 0=0.0%, 4=89.6%, 8=8.3%, 16=2.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.736 issued rwts: total=2081,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:16.736 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:16.736 filename1: (groupid=0, jobs=1): err= 0: pid=83420: Mon Jul 15 17:10:05 2024 00:21:16.736 read: IOPS=198, BW=793KiB/s (812kB/s)(7944KiB/10013msec) 00:21:16.736 slat (usec): min=5, max=4025, avg=19.61, stdev=131.48 00:21:16.736 clat (msec): min=15, max=157, avg=80.53, stdev=23.65 00:21:16.736 lat (msec): min=15, max=157, avg=80.55, stdev=23.65 00:21:16.736 clat percentiles (msec): 00:21:16.736 | 1.00th=[ 41], 5.00th=[ 48], 10.00th=[ 53], 20.00th=[ 64], 00:21:16.736 | 30.00th=[ 69], 40.00th=[ 72], 50.00th=[ 74], 60.00th=[ 81], 00:21:16.736 | 70.00th=[ 91], 80.00th=[ 101], 90.00th=[ 116], 95.00th=[ 126], 00:21:16.736 | 99.00th=[ 144], 99.50th=[ 144], 99.90th=[ 159], 99.95th=[ 159], 00:21:16.736 | 99.99th=[ 159] 00:21:16.736 bw ( KiB/s): min= 512, max= 1000, per=3.81%, avg=790.00, stdev=154.76, samples=20 00:21:16.736 iops : min= 128, max= 250, avg=197.45, stdev=38.74, samples=20 00:21:16.736 lat (msec) : 20=0.35%, 50=8.61%, 100=71.05%, 250=19.99% 00:21:16.736 cpu : usr=42.53%, sys=2.82%, ctx=1305, majf=0, minf=9 00:21:16.736 IO depths : 1=0.1%, 2=4.2%, 4=16.7%, 8=65.6%, 16=13.5%, 32=0.0%, >=64=0.0% 00:21:16.736 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.736 complete : 0=0.0%, 4=91.8%, 8=4.6%, 16=3.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.736 issued rwts: total=1986,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:16.736 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:16.736 filename1: (groupid=0, jobs=1): err= 0: pid=83421: Mon Jul 15 17:10:05 2024 00:21:16.736 read: IOPS=218, BW=873KiB/s (894kB/s)(8756KiB/10028msec) 00:21:16.736 slat (usec): min=6, max=8027, avg=21.56, stdev=242.19 00:21:16.736 clat (msec): min=23, max=155, avg=73.14, stdev=20.67 00:21:16.736 lat (msec): min=24, max=155, avg=73.17, stdev=20.66 00:21:16.736 clat percentiles (msec): 00:21:16.736 | 1.00th=[ 36], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 51], 00:21:16.736 | 30.00th=[ 61], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 73], 00:21:16.736 | 70.00th=[ 84], 80.00th=[ 94], 90.00th=[ 107], 95.00th=[ 110], 00:21:16.736 | 99.00th=[ 121], 99.50th=[ 122], 99.90th=[ 136], 99.95th=[ 144], 00:21:16.736 | 99.99th=[ 157] 00:21:16.736 bw ( KiB/s): min= 616, max= 1089, per=4.20%, avg=870.85, stdev=122.29, samples=20 00:21:16.736 iops : min= 154, max= 272, avg=217.65, stdev=30.55, samples=20 00:21:16.736 lat (msec) : 50=19.46%, 100=68.20%, 250=12.33% 00:21:16.736 cpu : usr=31.03%, sys=1.91%, ctx=908, majf=0, minf=9 00:21:16.736 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=82.7%, 16=16.6%, 32=0.0%, >=64=0.0% 00:21:16.736 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.736 complete : 0=0.0%, 4=87.5%, 8=12.3%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.736 issued rwts: total=2189,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:16.736 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:16.736 filename1: (groupid=0, jobs=1): err= 0: 
pid=83422: Mon Jul 15 17:10:05 2024 00:21:16.736 read: IOPS=220, BW=881KiB/s (902kB/s)(8820KiB/10010msec) 00:21:16.736 slat (usec): min=8, max=8029, avg=22.48, stdev=241.30 00:21:16.736 clat (msec): min=15, max=131, avg=72.53, stdev=20.44 00:21:16.736 lat (msec): min=15, max=131, avg=72.55, stdev=20.44 00:21:16.736 clat percentiles (msec): 00:21:16.736 | 1.00th=[ 35], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 53], 00:21:16.736 | 30.00th=[ 61], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 74], 00:21:16.736 | 70.00th=[ 82], 80.00th=[ 91], 90.00th=[ 105], 95.00th=[ 111], 00:21:16.736 | 99.00th=[ 118], 99.50th=[ 121], 99.90th=[ 126], 99.95th=[ 131], 00:21:16.736 | 99.99th=[ 131] 00:21:16.736 bw ( KiB/s): min= 664, max= 1048, per=4.24%, avg=879.05, stdev=121.64, samples=20 00:21:16.736 iops : min= 166, max= 262, avg=219.70, stdev=30.44, samples=20 00:21:16.736 lat (msec) : 20=0.32%, 50=17.05%, 100=70.25%, 250=12.38% 00:21:16.736 cpu : usr=31.68%, sys=1.94%, ctx=1086, majf=0, minf=9 00:21:16.736 IO depths : 1=0.1%, 2=0.2%, 4=0.9%, 8=82.7%, 16=16.2%, 32=0.0%, >=64=0.0% 00:21:16.736 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.736 complete : 0=0.0%, 4=87.4%, 8=12.4%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.736 issued rwts: total=2205,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:16.736 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:16.736 filename1: (groupid=0, jobs=1): err= 0: pid=83423: Mon Jul 15 17:10:05 2024 00:21:16.736 read: IOPS=216, BW=868KiB/s (888kB/s)(8696KiB/10024msec) 00:21:16.736 slat (usec): min=4, max=4030, avg=16.52, stdev=86.30 00:21:16.736 clat (msec): min=23, max=148, avg=73.67, stdev=20.94 00:21:16.736 lat (msec): min=24, max=148, avg=73.69, stdev=20.94 00:21:16.736 clat percentiles (msec): 00:21:16.736 | 1.00th=[ 36], 5.00th=[ 44], 10.00th=[ 48], 20.00th=[ 55], 00:21:16.736 | 30.00th=[ 63], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 75], 00:21:16.736 | 70.00th=[ 81], 80.00th=[ 94], 90.00th=[ 106], 95.00th=[ 113], 00:21:16.736 | 99.00th=[ 126], 99.50th=[ 127], 99.90th=[ 138], 99.95th=[ 138], 00:21:16.736 | 99.99th=[ 148] 00:21:16.736 bw ( KiB/s): min= 656, max= 1024, per=4.16%, avg=862.55, stdev=120.55, samples=20 00:21:16.736 iops : min= 164, max= 256, avg=215.55, stdev=30.16, samples=20 00:21:16.736 lat (msec) : 50=14.40%, 100=71.30%, 250=14.31% 00:21:16.736 cpu : usr=40.53%, sys=2.42%, ctx=1339, majf=0, minf=9 00:21:16.736 IO depths : 1=0.1%, 2=0.7%, 4=3.0%, 8=80.5%, 16=15.7%, 32=0.0%, >=64=0.0% 00:21:16.736 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.736 complete : 0=0.0%, 4=87.9%, 8=11.4%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.736 issued rwts: total=2174,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:16.736 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:16.736 filename2: (groupid=0, jobs=1): err= 0: pid=83424: Mon Jul 15 17:10:05 2024 00:21:16.736 read: IOPS=229, BW=917KiB/s (939kB/s)(9176KiB/10005msec) 00:21:16.736 slat (usec): min=8, max=8027, avg=19.99, stdev=189.45 00:21:16.736 clat (usec): min=1949, max=153585, avg=69670.56, stdev=22659.98 00:21:16.736 lat (usec): min=1958, max=153608, avg=69690.55, stdev=22658.88 00:21:16.736 clat percentiles (msec): 00:21:16.736 | 1.00th=[ 6], 5.00th=[ 40], 10.00th=[ 48], 20.00th=[ 49], 00:21:16.736 | 30.00th=[ 58], 40.00th=[ 64], 50.00th=[ 71], 60.00th=[ 72], 00:21:16.736 | 70.00th=[ 77], 80.00th=[ 85], 90.00th=[ 103], 95.00th=[ 110], 00:21:16.736 | 99.00th=[ 122], 99.50th=[ 142], 99.90th=[ 142], 99.95th=[ 155], 00:21:16.736 | 
99.99th=[ 155] 00:21:16.736 bw ( KiB/s): min= 664, max= 1024, per=4.31%, avg=894.63, stdev=113.80, samples=19 00:21:16.736 iops : min= 166, max= 256, avg=223.63, stdev=28.46, samples=19 00:21:16.736 lat (msec) : 2=0.13%, 10=1.66%, 20=0.31%, 50=21.01%, 100=65.95% 00:21:16.736 lat (msec) : 250=10.94% 00:21:16.736 cpu : usr=37.49%, sys=2.32%, ctx=1205, majf=0, minf=9 00:21:16.736 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=83.5%, 16=15.9%, 32=0.0%, >=64=0.0% 00:21:16.736 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.736 complete : 0=0.0%, 4=86.9%, 8=12.9%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.736 issued rwts: total=2294,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:16.736 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:16.736 filename2: (groupid=0, jobs=1): err= 0: pid=83425: Mon Jul 15 17:10:05 2024 00:21:16.736 read: IOPS=225, BW=901KiB/s (923kB/s)(9016KiB/10006msec) 00:21:16.736 slat (usec): min=4, max=12039, avg=24.12, stdev=304.37 00:21:16.736 clat (msec): min=6, max=160, avg=70.90, stdev=21.56 00:21:16.736 lat (msec): min=6, max=160, avg=70.93, stdev=21.56 00:21:16.736 clat percentiles (msec): 00:21:16.736 | 1.00th=[ 28], 5.00th=[ 41], 10.00th=[ 48], 20.00th=[ 50], 00:21:16.736 | 30.00th=[ 59], 40.00th=[ 67], 50.00th=[ 72], 60.00th=[ 72], 00:21:16.736 | 70.00th=[ 81], 80.00th=[ 85], 90.00th=[ 105], 95.00th=[ 109], 00:21:16.736 | 99.00th=[ 120], 99.50th=[ 146], 99.90th=[ 146], 99.95th=[ 161], 00:21:16.736 | 99.99th=[ 161] 00:21:16.736 bw ( KiB/s): min= 664, max= 1048, per=4.28%, avg=888.74, stdev=120.33, samples=19 00:21:16.736 iops : min= 166, max= 262, avg=222.16, stdev=30.10, samples=19 00:21:16.736 lat (msec) : 10=0.44%, 20=0.27%, 50=20.59%, 100=67.30%, 250=11.40% 00:21:16.736 cpu : usr=31.15%, sys=1.82%, ctx=893, majf=0, minf=9 00:21:16.736 IO depths : 1=0.1%, 2=0.2%, 4=0.9%, 8=83.1%, 16=15.7%, 32=0.0%, >=64=0.0% 00:21:16.736 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.736 complete : 0=0.0%, 4=87.0%, 8=12.8%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.736 issued rwts: total=2254,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:16.736 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:16.736 filename2: (groupid=0, jobs=1): err= 0: pid=83426: Mon Jul 15 17:10:05 2024 00:21:16.736 read: IOPS=203, BW=814KiB/s (833kB/s)(8172KiB/10040msec) 00:21:16.736 slat (usec): min=4, max=8032, avg=33.58, stdev=395.92 00:21:16.736 clat (msec): min=19, max=144, avg=78.42, stdev=20.43 00:21:16.736 lat (msec): min=19, max=144, avg=78.46, stdev=20.43 00:21:16.736 clat percentiles (msec): 00:21:16.736 | 1.00th=[ 36], 5.00th=[ 48], 10.00th=[ 51], 20.00th=[ 63], 00:21:16.737 | 30.00th=[ 72], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 81], 00:21:16.737 | 70.00th=[ 85], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 117], 00:21:16.737 | 99.00th=[ 132], 99.50th=[ 132], 99.90th=[ 144], 99.95th=[ 144], 00:21:16.737 | 99.99th=[ 144] 00:21:16.737 bw ( KiB/s): min= 640, max= 944, per=3.91%, avg=810.70, stdev=95.36, samples=20 00:21:16.737 iops : min= 160, max= 236, avg=202.65, stdev=23.85, samples=20 00:21:16.737 lat (msec) : 20=0.78%, 50=8.57%, 100=74.69%, 250=15.96% 00:21:16.737 cpu : usr=31.18%, sys=2.03%, ctx=853, majf=0, minf=9 00:21:16.737 IO depths : 1=0.1%, 2=1.4%, 4=5.4%, 8=76.9%, 16=16.3%, 32=0.0%, >=64=0.0% 00:21:16.737 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.737 complete : 0=0.0%, 4=89.4%, 8=9.5%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.737 issued rwts: total=2043,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:21:16.737 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:16.737 filename2: (groupid=0, jobs=1): err= 0: pid=83427: Mon Jul 15 17:10:05 2024 00:21:16.737 read: IOPS=222, BW=892KiB/s (913kB/s)(8924KiB/10010msec) 00:21:16.737 slat (usec): min=4, max=11025, avg=32.09, stdev=357.50 00:21:16.737 clat (msec): min=18, max=136, avg=71.62, stdev=20.12 00:21:16.737 lat (msec): min=18, max=136, avg=71.66, stdev=20.13 00:21:16.737 clat percentiles (msec): 00:21:16.737 | 1.00th=[ 36], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 51], 00:21:16.737 | 30.00th=[ 61], 40.00th=[ 67], 50.00th=[ 71], 60.00th=[ 72], 00:21:16.737 | 70.00th=[ 80], 80.00th=[ 90], 90.00th=[ 104], 95.00th=[ 109], 00:21:16.737 | 99.00th=[ 120], 99.50th=[ 120], 99.90th=[ 130], 99.95th=[ 130], 00:21:16.737 | 99.99th=[ 136] 00:21:16.737 bw ( KiB/s): min= 656, max= 1024, per=4.28%, avg=888.25, stdev=127.70, samples=20 00:21:16.737 iops : min= 164, max= 256, avg=222.00, stdev=31.94, samples=20 00:21:16.737 lat (msec) : 20=0.27%, 50=19.05%, 100=69.07%, 250=11.61% 00:21:16.737 cpu : usr=32.26%, sys=2.04%, ctx=1054, majf=0, minf=9 00:21:16.737 IO depths : 1=0.1%, 2=1.0%, 4=3.9%, 8=79.9%, 16=15.2%, 32=0.0%, >=64=0.0% 00:21:16.737 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.737 complete : 0=0.0%, 4=87.7%, 8=11.4%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.737 issued rwts: total=2231,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:16.737 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:16.737 filename2: (groupid=0, jobs=1): err= 0: pid=83428: Mon Jul 15 17:10:05 2024 00:21:16.737 read: IOPS=223, BW=896KiB/s (917kB/s)(8964KiB/10010msec) 00:21:16.737 slat (usec): min=5, max=8024, avg=26.78, stdev=232.82 00:21:16.737 clat (msec): min=10, max=135, avg=71.35, stdev=20.66 00:21:16.737 lat (msec): min=10, max=135, avg=71.38, stdev=20.66 00:21:16.737 clat percentiles (msec): 00:21:16.737 | 1.00th=[ 36], 5.00th=[ 44], 10.00th=[ 48], 20.00th=[ 51], 00:21:16.737 | 30.00th=[ 60], 40.00th=[ 66], 50.00th=[ 72], 60.00th=[ 73], 00:21:16.737 | 70.00th=[ 80], 80.00th=[ 89], 90.00th=[ 105], 95.00th=[ 110], 00:21:16.737 | 99.00th=[ 120], 99.50th=[ 121], 99.90th=[ 130], 99.95th=[ 136], 00:21:16.737 | 99.99th=[ 136] 00:21:16.737 bw ( KiB/s): min= 664, max= 1072, per=4.30%, avg=891.40, stdev=131.70, samples=20 00:21:16.737 iops : min= 166, max= 268, avg=222.80, stdev=32.94, samples=20 00:21:16.737 lat (msec) : 20=0.54%, 50=18.47%, 100=68.09%, 250=12.90% 00:21:16.737 cpu : usr=40.53%, sys=2.27%, ctx=1350, majf=0, minf=9 00:21:16.737 IO depths : 1=0.1%, 2=0.8%, 4=3.0%, 8=80.9%, 16=15.4%, 32=0.0%, >=64=0.0% 00:21:16.737 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.737 complete : 0=0.0%, 4=87.5%, 8=11.8%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.737 issued rwts: total=2241,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:16.737 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:16.737 filename2: (groupid=0, jobs=1): err= 0: pid=83429: Mon Jul 15 17:10:05 2024 00:21:16.737 read: IOPS=211, BW=847KiB/s (867kB/s)(8504KiB/10040msec) 00:21:16.737 slat (usec): min=4, max=8038, avg=27.71, stdev=313.39 00:21:16.737 clat (msec): min=19, max=155, avg=75.41, stdev=21.12 00:21:16.737 lat (msec): min=19, max=155, avg=75.44, stdev=21.12 00:21:16.737 clat percentiles (msec): 00:21:16.737 | 1.00th=[ 32], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 59], 00:21:16.737 | 30.00th=[ 64], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 77], 00:21:16.737 | 70.00th=[ 
85], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 110], 00:21:16.737 | 99.00th=[ 121], 99.50th=[ 123], 99.90th=[ 144], 99.95th=[ 146], 00:21:16.737 | 99.99th=[ 157] 00:21:16.737 bw ( KiB/s): min= 632, max= 1024, per=4.07%, avg=843.90, stdev=122.22, samples=20 00:21:16.737 iops : min= 158, max= 256, avg=210.95, stdev=30.58, samples=20 00:21:16.737 lat (msec) : 20=0.66%, 50=14.11%, 100=71.26%, 250=13.97% 00:21:16.737 cpu : usr=34.65%, sys=2.09%, ctx=1009, majf=0, minf=9 00:21:16.737 IO depths : 1=0.1%, 2=1.0%, 4=3.9%, 8=78.8%, 16=16.3%, 32=0.0%, >=64=0.0% 00:21:16.737 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.737 complete : 0=0.0%, 4=88.7%, 8=10.5%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.737 issued rwts: total=2126,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:16.737 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:16.737 filename2: (groupid=0, jobs=1): err= 0: pid=83430: Mon Jul 15 17:10:05 2024 00:21:16.737 read: IOPS=231, BW=927KiB/s (949kB/s)(9272KiB/10005msec) 00:21:16.737 slat (usec): min=4, max=8027, avg=21.22, stdev=235.23 00:21:16.737 clat (usec): min=1910, max=155849, avg=68964.25, stdev=22768.17 00:21:16.737 lat (usec): min=1919, max=155865, avg=68985.47, stdev=22766.57 00:21:16.737 clat percentiles (msec): 00:21:16.737 | 1.00th=[ 7], 5.00th=[ 36], 10.00th=[ 48], 20.00th=[ 48], 00:21:16.737 | 30.00th=[ 58], 40.00th=[ 61], 50.00th=[ 72], 60.00th=[ 72], 00:21:16.737 | 70.00th=[ 73], 80.00th=[ 85], 90.00th=[ 105], 95.00th=[ 109], 00:21:16.737 | 99.00th=[ 121], 99.50th=[ 144], 99.90th=[ 144], 99.95th=[ 157], 00:21:16.737 | 99.99th=[ 157] 00:21:16.737 bw ( KiB/s): min= 664, max= 1048, per=4.34%, avg=900.11, stdev=117.55, samples=19 00:21:16.737 iops : min= 166, max= 262, avg=225.00, stdev=29.40, samples=19 00:21:16.737 lat (msec) : 2=0.17%, 4=0.13%, 10=1.51%, 20=0.26%, 50=24.55% 00:21:16.737 lat (msec) : 100=63.29%, 250=10.09% 00:21:16.737 cpu : usr=31.49%, sys=1.77%, ctx=852, majf=0, minf=9 00:21:16.737 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=83.6%, 16=15.7%, 32=0.0%, >=64=0.0% 00:21:16.737 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.737 complete : 0=0.0%, 4=86.8%, 8=13.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.737 issued rwts: total=2318,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:16.737 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:16.737 filename2: (groupid=0, jobs=1): err= 0: pid=83431: Mon Jul 15 17:10:05 2024 00:21:16.737 read: IOPS=218, BW=874KiB/s (894kB/s)(8736KiB/10001msec) 00:21:16.737 slat (usec): min=7, max=4049, avg=22.03, stdev=171.84 00:21:16.737 clat (usec): min=1856, max=155811, avg=73152.74, stdev=22396.30 00:21:16.737 lat (usec): min=1864, max=155829, avg=73174.77, stdev=22393.76 00:21:16.737 clat percentiles (msec): 00:21:16.737 | 1.00th=[ 5], 5.00th=[ 44], 10.00th=[ 48], 20.00th=[ 57], 00:21:16.737 | 30.00th=[ 65], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 74], 00:21:16.737 | 70.00th=[ 82], 80.00th=[ 92], 90.00th=[ 107], 95.00th=[ 109], 00:21:16.737 | 99.00th=[ 121], 99.50th=[ 131], 99.90th=[ 131], 99.95th=[ 157], 00:21:16.737 | 99.99th=[ 157] 00:21:16.737 bw ( KiB/s): min= 664, max= 976, per=4.04%, avg=838.63, stdev=103.55, samples=19 00:21:16.737 iops : min= 166, max= 244, avg=209.63, stdev=25.87, samples=19 00:21:16.737 lat (msec) : 2=0.46%, 4=0.37%, 10=1.51%, 20=0.41%, 50=11.40% 00:21:16.737 lat (msec) : 100=72.57%, 250=13.28% 00:21:16.737 cpu : usr=39.63%, sys=2.45%, ctx=1263, majf=0, minf=9 00:21:16.737 IO depths : 1=0.1%, 2=2.2%, 4=8.7%, 8=74.4%, 
16=14.6%, 32=0.0%, >=64=0.0% 00:21:16.737 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.737 complete : 0=0.0%, 4=89.3%, 8=8.7%, 16=1.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.737 issued rwts: total=2184,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:16.737 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:16.737 00:21:16.737 Run status group 0 (all jobs): 00:21:16.737 READ: bw=20.2MiB/s (21.2MB/s), 793KiB/s-927KiB/s (812kB/s-949kB/s), io=203MiB (213MB), run=10001-10048msec 00:21:16.737 17:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:21:16.737 17:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:21:16.737 17:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:21:16.737 17:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:21:16.737 17:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:21:16.737 17:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:16.737 17:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.737 17:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:16.737 17:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.737 17:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:21:16.737 17:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.737 17:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:16.737 17:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.737 17:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:21:16.737 17:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:21:16.737 17:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:21:16.737 17:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:16.737 17:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.737 17:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:16.737 17:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.737 17:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:21:16.737 17:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.737 17:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:16.737 17:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.737 17:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:21:16.737 17:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:21:16.737 17:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:21:16.737 17:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:16.737 17:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.737 17:10:05 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:21:16.737 17:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.737 17:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:21:16.737 17:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.737 17:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:16.738 17:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.738 17:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:21:16.738 17:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:21:16.738 17:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:21:16.738 17:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:21:16.738 17:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:21:16.738 17:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:21:16.738 17:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:21:16.738 17:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:21:16.738 17:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:21:16.738 17:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:21:16.738 17:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:21:16.738 17:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:21:16.738 17:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.738 17:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:16.738 bdev_null0 00:21:16.738 17:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.738 17:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:21:16.738 17:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.738 17:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:16.738 17:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.738 17:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:21:16.738 17:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.738 17:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:16.738 17:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.738 17:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:16.738 17:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.738 17:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:16.738 [2024-07-15 17:10:05.410696] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:16.738 17:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.738 
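[Annotation — not part of the captured log] The trace above is create_subsystem 0 for the next fio run: it creates a 64 MB null bdev with 512-byte blocks, 16 bytes of metadata and DIF type 1, adds it as a namespace of nqn.2016-06.io.spdk:cnode0, and exposes that subsystem over NVMe/TCP at 10.0.0.2:4420. Assuming rpc_cmd is the usual SPDK wrapper around scripts/rpc.py on the default RPC socket (the script path is an assumption; every argument below is copied verbatim from the trace), the same per-subsystem setup can be reproduced roughly as follows — the TCP transport itself was created earlier in the run and is not repeated here:

# Sketch of the subsystem-0 setup performed by create_subsystem 0 above
./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
    --serial-number 53313233-0 --allow-any-host
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420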
17:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:21:16.738 17:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:21:16.738 17:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:21:16.738 17:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:21:16.738 17:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.738 17:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:16.738 bdev_null1 00:21:16.738 17:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.738 17:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:21:16.738 17:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.738 17:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:16.738 17:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.738 17:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:21:16.738 17:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.738 17:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:16.738 17:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.738 17:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:16.738 17:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.738 17:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:16.738 17:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.738 17:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:21:16.738 17:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:21:16.738 17:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:21:16.738 17:10:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:21:16.738 17:10:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:21:16.738 17:10:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:16.738 17:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:21:16.738 17:10:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:16.738 { 00:21:16.738 "params": { 00:21:16.738 "name": "Nvme$subsystem", 00:21:16.738 "trtype": "$TEST_TRANSPORT", 00:21:16.738 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:16.738 "adrfam": "ipv4", 00:21:16.738 "trsvcid": "$NVMF_PORT", 00:21:16.738 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:16.738 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:16.738 "hdgst": ${hdgst:-false}, 00:21:16.738 "ddgst": ${ddgst:-false} 00:21:16.738 }, 00:21:16.738 "method": "bdev_nvme_attach_controller" 00:21:16.738 } 00:21:16.738 EOF 00:21:16.738 )") 00:21:16.738 17:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev 
--ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:16.738 17:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:21:16.738 17:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:21:16.738 17:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:16.738 17:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:21:16.738 17:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:16.738 17:10:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:21:16.738 17:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:21:16.738 17:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:16.738 17:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:21:16.738 17:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:21:16.738 17:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:21:16.738 17:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:21:16.738 17:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:21:16.738 17:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:16.738 17:10:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:16.738 17:10:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:16.738 { 00:21:16.738 "params": { 00:21:16.738 "name": "Nvme$subsystem", 00:21:16.738 "trtype": "$TEST_TRANSPORT", 00:21:16.738 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:16.738 "adrfam": "ipv4", 00:21:16.738 "trsvcid": "$NVMF_PORT", 00:21:16.738 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:16.738 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:16.738 "hdgst": ${hdgst:-false}, 00:21:16.738 "ddgst": ${ddgst:-false} 00:21:16.738 }, 00:21:16.738 "method": "bdev_nvme_attach_controller" 00:21:16.738 } 00:21:16.738 EOF 00:21:16.738 )") 00:21:16.738 17:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:16.738 17:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:21:16.738 17:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:21:16.738 17:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:21:16.738 17:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:16.738 17:10:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:21:16.738 17:10:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:21:16.738 17:10:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:21:16.738 17:10:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:16.738 "params": { 00:21:16.738 "name": "Nvme0", 00:21:16.738 "trtype": "tcp", 00:21:16.738 "traddr": "10.0.0.2", 00:21:16.738 "adrfam": "ipv4", 00:21:16.738 "trsvcid": "4420", 00:21:16.738 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:16.738 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:16.738 "hdgst": false, 00:21:16.738 "ddgst": false 00:21:16.738 }, 00:21:16.738 "method": "bdev_nvme_attach_controller" 00:21:16.738 },{ 00:21:16.738 "params": { 00:21:16.738 "name": "Nvme1", 00:21:16.738 "trtype": "tcp", 00:21:16.738 "traddr": "10.0.0.2", 00:21:16.738 "adrfam": "ipv4", 00:21:16.738 "trsvcid": "4420", 00:21:16.738 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:16.738 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:16.738 "hdgst": false, 00:21:16.738 "ddgst": false 00:21:16.738 }, 00:21:16.738 "method": "bdev_nvme_attach_controller" 00:21:16.738 }' 00:21:16.738 17:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:16.738 17:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:16.738 17:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:16.738 17:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:16.738 17:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:21:16.738 17:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:16.738 17:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:16.738 17:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:16.738 17:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:16.738 17:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:16.738 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:21:16.738 ... 00:21:16.738 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:21:16.738 ... 
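[Annotation — not part of the captured log] At this point fio_bdev has two inputs: the JSON printed just above, passed via --spdk_json_conf, which tells the spdk_bdev ioengine to attach controllers Nvme0 and Nvme1 to the two subsystems created earlier, and a job description on /dev/fd/61 generated by gen_fio_conf, which is not echoed in the log. From the parameters visible in the trace (randread, bs=8k,16k,128k, iodepth=8, numjobs=2, runtime=5, two filename sections, thread mode), the job file plausibly looks like the sketch below; the output file name and the filename= bdev names (Nvme0n1, Nvme1n1) are assumptions, not taken from the log:

# Hypothetical reconstruction of the generated fio job description
cat > dif_rand_params.fio <<'EOF'
[global]
# spdk_bdev ioengine: I/O goes to the SPDK bdevs named in filename=, not kernel devices
ioengine=spdk_bdev
thread=1
rw=randread
# read,write,trim block sizes, matching bs=8k,16k,128k echoed by target/dif.sh
bs=8k,16k,128k
iodepth=8
numjobs=2
runtime=5

[filename0]
filename=Nvme0n1

[filename1]
filename=Nvme1n1
EOF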
00:21:16.738 fio-3.35 00:21:16.738 Starting 4 threads 00:21:22.006 00:21:22.006 filename0: (groupid=0, jobs=1): err= 0: pid=83567: Mon Jul 15 17:10:11 2024 00:21:22.006 read: IOPS=2120, BW=16.6MiB/s (17.4MB/s)(82.9MiB/5003msec) 00:21:22.006 slat (usec): min=7, max=293, avg=12.47, stdev= 5.60 00:21:22.006 clat (usec): min=639, max=7036, avg=3731.98, stdev=860.09 00:21:22.006 lat (usec): min=648, max=7049, avg=3744.46, stdev=860.92 00:21:22.006 clat percentiles (usec): 00:21:22.006 | 1.00th=[ 1385], 5.00th=[ 1467], 10.00th=[ 2606], 20.00th=[ 3294], 00:21:22.006 | 30.00th=[ 3589], 40.00th=[ 3818], 50.00th=[ 3851], 60.00th=[ 3884], 00:21:22.006 | 70.00th=[ 4047], 80.00th=[ 4490], 90.00th=[ 4621], 95.00th=[ 4883], 00:21:22.006 | 99.00th=[ 5211], 99.50th=[ 5866], 99.90th=[ 6194], 99.95th=[ 6849], 00:21:22.006 | 99.99th=[ 7046] 00:21:22.006 bw ( KiB/s): min=14124, max=18832, per=25.62%, avg=17004.00, stdev=1711.60, samples=9 00:21:22.006 iops : min= 1765, max= 2354, avg=2125.44, stdev=214.06, samples=9 00:21:22.006 lat (usec) : 750=0.09%, 1000=0.08% 00:21:22.006 lat (msec) : 2=5.99%, 4=61.43%, 10=32.42% 00:21:22.006 cpu : usr=91.26%, sys=7.58%, ctx=70, majf=0, minf=9 00:21:22.006 IO depths : 1=0.1%, 2=12.1%, 4=59.2%, 8=28.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:22.006 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:22.006 complete : 0=0.0%, 4=95.3%, 8=4.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:22.006 issued rwts: total=10609,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:22.006 latency : target=0, window=0, percentile=100.00%, depth=8 00:21:22.006 filename0: (groupid=0, jobs=1): err= 0: pid=83568: Mon Jul 15 17:10:11 2024 00:21:22.006 read: IOPS=2133, BW=16.7MiB/s (17.5MB/s)(83.4MiB/5002msec) 00:21:22.006 slat (nsec): min=7864, max=43166, avg=14697.55, stdev=2979.92 00:21:22.006 clat (usec): min=1213, max=6317, avg=3704.53, stdev=751.76 00:21:22.006 lat (usec): min=1240, max=6345, avg=3719.22, stdev=751.99 00:21:22.006 clat percentiles (usec): 00:21:22.006 | 1.00th=[ 1614], 5.00th=[ 2212], 10.00th=[ 2540], 20.00th=[ 3261], 00:21:22.006 | 30.00th=[ 3392], 40.00th=[ 3785], 50.00th=[ 3851], 60.00th=[ 3884], 00:21:22.006 | 70.00th=[ 4113], 80.00th=[ 4228], 90.00th=[ 4555], 95.00th=[ 4752], 00:21:22.006 | 99.00th=[ 5145], 99.50th=[ 5211], 99.90th=[ 5407], 99.95th=[ 6063], 00:21:22.006 | 99.99th=[ 6128] 00:21:22.006 bw ( KiB/s): min=15872, max=18848, per=25.76%, avg=17095.11, stdev=1227.63, samples=9 00:21:22.006 iops : min= 1984, max= 2356, avg=2136.89, stdev=153.45, samples=9 00:21:22.006 lat (msec) : 2=1.55%, 4=65.63%, 10=32.82% 00:21:22.006 cpu : usr=92.06%, sys=7.12%, ctx=12, majf=0, minf=0 00:21:22.006 IO depths : 1=0.1%, 2=11.7%, 4=59.4%, 8=28.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:22.006 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:22.006 complete : 0=0.0%, 4=95.5%, 8=4.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:22.006 issued rwts: total=10672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:22.006 latency : target=0, window=0, percentile=100.00%, depth=8 00:21:22.006 filename1: (groupid=0, jobs=1): err= 0: pid=83569: Mon Jul 15 17:10:11 2024 00:21:22.006 read: IOPS=2133, BW=16.7MiB/s (17.5MB/s)(83.4MiB/5001msec) 00:21:22.006 slat (nsec): min=7581, max=70947, avg=15216.59, stdev=3503.59 00:21:22.006 clat (usec): min=1223, max=5951, avg=3700.36, stdev=749.71 00:21:22.006 lat (usec): min=1248, max=5966, avg=3715.58, stdev=749.35 00:21:22.006 clat percentiles (usec): 00:21:22.006 | 1.00th=[ 1598], 5.00th=[ 2212], 10.00th=[ 2540], 20.00th=[ 
3261], 00:21:22.006 | 30.00th=[ 3392], 40.00th=[ 3785], 50.00th=[ 3851], 60.00th=[ 3884], 00:21:22.006 | 70.00th=[ 4113], 80.00th=[ 4228], 90.00th=[ 4555], 95.00th=[ 4752], 00:21:22.006 | 99.00th=[ 5145], 99.50th=[ 5211], 99.90th=[ 5342], 99.95th=[ 5407], 00:21:22.006 | 99.99th=[ 5407] 00:21:22.006 bw ( KiB/s): min=15872, max=18885, per=25.77%, avg=17099.22, stdev=1234.28, samples=9 00:21:22.006 iops : min= 1984, max= 2360, avg=2137.33, stdev=154.17, samples=9 00:21:22.006 lat (msec) : 2=1.56%, 4=65.64%, 10=32.81% 00:21:22.006 cpu : usr=91.92%, sys=7.18%, ctx=3, majf=0, minf=0 00:21:22.006 IO depths : 1=0.1%, 2=11.7%, 4=59.4%, 8=28.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:22.006 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:22.006 complete : 0=0.0%, 4=95.5%, 8=4.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:22.006 issued rwts: total=10672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:22.006 latency : target=0, window=0, percentile=100.00%, depth=8 00:21:22.006 filename1: (groupid=0, jobs=1): err= 0: pid=83570: Mon Jul 15 17:10:11 2024 00:21:22.006 read: IOPS=1908, BW=14.9MiB/s (15.6MB/s)(74.6MiB/5002msec) 00:21:22.006 slat (nsec): min=7670, max=40527, avg=14528.17, stdev=3385.00 00:21:22.006 clat (usec): min=989, max=6987, avg=4139.20, stdev=779.34 00:21:22.006 lat (usec): min=1003, max=7002, avg=4153.73, stdev=779.41 00:21:22.006 clat percentiles (usec): 00:21:22.006 | 1.00th=[ 1958], 5.00th=[ 3228], 10.00th=[ 3294], 20.00th=[ 3785], 00:21:22.006 | 30.00th=[ 3818], 40.00th=[ 3851], 50.00th=[ 3916], 60.00th=[ 4178], 00:21:22.006 | 70.00th=[ 4490], 80.00th=[ 4555], 90.00th=[ 5080], 95.00th=[ 5932], 00:21:22.006 | 99.00th=[ 6128], 99.50th=[ 6194], 99.90th=[ 6456], 99.95th=[ 6652], 00:21:22.006 | 99.99th=[ 6980] 00:21:22.006 bw ( KiB/s): min=11664, max=16896, per=22.76%, avg=15102.22, stdev=1661.06, samples=9 00:21:22.006 iops : min= 1458, max= 2112, avg=1887.78, stdev=207.63, samples=9 00:21:22.006 lat (usec) : 1000=0.01% 00:21:22.006 lat (msec) : 2=1.07%, 4=51.40%, 10=47.52% 00:21:22.006 cpu : usr=91.76%, sys=7.48%, ctx=9, majf=0, minf=9 00:21:22.006 IO depths : 1=0.1%, 2=18.9%, 4=54.6%, 8=26.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:22.006 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:22.006 complete : 0=0.0%, 4=92.5%, 8=7.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:22.006 issued rwts: total=9546,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:22.006 latency : target=0, window=0, percentile=100.00%, depth=8 00:21:22.006 00:21:22.006 Run status group 0 (all jobs): 00:21:22.006 READ: bw=64.8MiB/s (68.0MB/s), 14.9MiB/s-16.7MiB/s (15.6MB/s-17.5MB/s), io=324MiB (340MB), run=5001-5003msec 00:21:22.006 17:10:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:21:22.006 17:10:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:21:22.006 17:10:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:21:22.006 17:10:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:21:22.006 17:10:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:21:22.006 17:10:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:22.006 17:10:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:22.006 17:10:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:22.006 17:10:11 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:22.006 17:10:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:21:22.006 17:10:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:22.006 17:10:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:22.006 17:10:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:22.006 17:10:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:21:22.006 17:10:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:21:22.006 17:10:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:21:22.006 17:10:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:22.006 17:10:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:22.006 17:10:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:22.006 17:10:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:22.006 17:10:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:21:22.006 17:10:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:22.006 17:10:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:22.006 ************************************ 00:21:22.006 END TEST fio_dif_rand_params 00:21:22.006 ************************************ 00:21:22.006 17:10:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:22.006 00:21:22.006 real 0m23.536s 00:21:22.006 user 2m2.946s 00:21:22.006 sys 0m8.952s 00:21:22.006 17:10:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:22.006 17:10:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:22.006 17:10:11 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:21:22.006 17:10:11 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:21:22.006 17:10:11 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:22.006 17:10:11 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:22.006 17:10:11 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:21:22.006 ************************************ 00:21:22.006 START TEST fio_dif_digest 00:21:22.006 ************************************ 00:21:22.006 17:10:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:21:22.006 17:10:11 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:21:22.006 17:10:11 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:21:22.006 17:10:11 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:21:22.006 17:10:11 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:21:22.006 17:10:11 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:21:22.006 17:10:11 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:21:22.006 17:10:11 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:21:22.006 17:10:11 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:21:22.006 17:10:11 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:21:22.006 17:10:11 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # 
ddgst=true 00:21:22.006 17:10:11 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:21:22.006 17:10:11 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:21:22.006 17:10:11 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:21:22.006 17:10:11 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:21:22.006 17:10:11 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:21:22.006 17:10:11 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:21:22.006 17:10:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:22.006 17:10:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:22.006 bdev_null0 00:21:22.006 17:10:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:22.007 17:10:11 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:21:22.007 17:10:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:22.007 17:10:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:22.007 17:10:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:22.007 17:10:11 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:21:22.007 17:10:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:22.007 17:10:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:22.007 17:10:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:22.007 17:10:11 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:22.007 17:10:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:22.007 17:10:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:22.007 [2024-07-15 17:10:11.558124] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:22.007 17:10:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:22.007 17:10:11 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:21:22.007 17:10:11 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:21:22.007 17:10:11 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:21:22.007 17:10:11 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:21:22.007 17:10:11 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:21:22.007 17:10:11 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:22.007 17:10:11 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:22.007 17:10:11 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:21:22.007 17:10:11 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:22.007 { 00:21:22.007 "params": { 00:21:22.007 "name": "Nvme$subsystem", 00:21:22.007 "trtype": "$TEST_TRANSPORT", 00:21:22.007 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:22.007 "adrfam": "ipv4", 00:21:22.007 "trsvcid": "$NVMF_PORT", 00:21:22.007 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:22.007 
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:22.007 "hdgst": ${hdgst:-false}, 00:21:22.007 "ddgst": ${ddgst:-false} 00:21:22.007 }, 00:21:22.007 "method": "bdev_nvme_attach_controller" 00:21:22.007 } 00:21:22.007 EOF 00:21:22.007 )") 00:21:22.007 17:10:11 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:21:22.007 17:10:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:22.007 17:10:11 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:21:22.007 17:10:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:21:22.007 17:10:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:22.007 17:10:11 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:21:22.007 17:10:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:21:22.007 17:10:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:22.007 17:10:11 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:21:22.007 17:10:11 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:21:22.007 17:10:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:21:22.007 17:10:11 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 00:21:22.007 17:10:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:21:22.007 17:10:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:22.007 17:10:11 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:21:22.007 17:10:11 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:22.007 "params": { 00:21:22.007 "name": "Nvme0", 00:21:22.007 "trtype": "tcp", 00:21:22.007 "traddr": "10.0.0.2", 00:21:22.007 "adrfam": "ipv4", 00:21:22.007 "trsvcid": "4420", 00:21:22.007 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:22.007 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:22.007 "hdgst": true, 00:21:22.007 "ddgst": true 00:21:22.007 }, 00:21:22.007 "method": "bdev_nvme_attach_controller" 00:21:22.007 }' 00:21:22.007 17:10:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:22.007 17:10:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:21:22.007 17:10:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:22.007 17:10:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:22.007 17:10:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:22.007 17:10:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:22.007 17:10:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:21:22.007 17:10:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:22.007 17:10:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:22.007 17:10:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:22.007 17:10:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:22.007 17:10:11 
nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:22.007 17:10:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:22.007 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:21:22.007 ... 00:21:22.007 fio-3.35 00:21:22.007 Starting 3 threads 00:21:32.055 00:21:32.055 filename0: (groupid=0, jobs=1): err= 0: pid=83676: Mon Jul 15 17:10:22 2024 00:21:32.055 read: IOPS=227, BW=28.4MiB/s (29.8MB/s)(284MiB/10011msec) 00:21:32.055 slat (usec): min=7, max=146, avg=13.39, stdev= 6.89 00:21:32.055 clat (usec): min=11990, max=16294, avg=13172.73, stdev=164.49 00:21:32.055 lat (usec): min=11998, max=16327, avg=13186.12, stdev=164.87 00:21:32.055 clat percentiles (usec): 00:21:32.055 | 1.00th=[12780], 5.00th=[13042], 10.00th=[13042], 20.00th=[13173], 00:21:32.055 | 30.00th=[13173], 40.00th=[13173], 50.00th=[13173], 60.00th=[13173], 00:21:32.055 | 70.00th=[13173], 80.00th=[13173], 90.00th=[13304], 95.00th=[13304], 00:21:32.055 | 99.00th=[13698], 99.50th=[13698], 99.90th=[16319], 99.95th=[16319], 00:21:32.055 | 99.99th=[16319] 00:21:32.055 bw ( KiB/s): min=28416, max=29184, per=33.33%, avg=29068.80, stdev=281.35, samples=20 00:21:32.055 iops : min= 222, max= 228, avg=227.10, stdev= 2.20, samples=20 00:21:32.055 lat (msec) : 20=100.00% 00:21:32.055 cpu : usr=90.63%, sys=8.39%, ctx=109, majf=0, minf=0 00:21:32.055 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:32.055 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:32.055 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:32.055 issued rwts: total=2274,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:32.055 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:32.055 filename0: (groupid=0, jobs=1): err= 0: pid=83677: Mon Jul 15 17:10:22 2024 00:21:32.055 read: IOPS=227, BW=28.4MiB/s (29.8MB/s)(284MiB/10009msec) 00:21:32.055 slat (nsec): min=7628, max=57446, avg=13093.74, stdev=6351.35 00:21:32.055 clat (usec): min=12598, max=14608, avg=13171.19, stdev=109.30 00:21:32.055 lat (usec): min=12606, max=14635, avg=13184.28, stdev=109.90 00:21:32.055 clat percentiles (usec): 00:21:32.055 | 1.00th=[12911], 5.00th=[13042], 10.00th=[13042], 20.00th=[13173], 00:21:32.055 | 30.00th=[13173], 40.00th=[13173], 50.00th=[13173], 60.00th=[13173], 00:21:32.055 | 70.00th=[13173], 80.00th=[13173], 90.00th=[13304], 95.00th=[13304], 00:21:32.055 | 99.00th=[13566], 99.50th=[13698], 99.90th=[14615], 99.95th=[14615], 00:21:32.055 | 99.99th=[14615] 00:21:32.055 bw ( KiB/s): min=28416, max=29184, per=33.33%, avg=29068.80, stdev=281.35, samples=20 00:21:32.055 iops : min= 222, max= 228, avg=227.10, stdev= 2.20, samples=20 00:21:32.055 lat (msec) : 20=100.00% 00:21:32.055 cpu : usr=91.69%, sys=7.76%, ctx=112, majf=0, minf=0 00:21:32.055 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:32.055 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:32.055 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:32.055 issued rwts: total=2274,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:32.055 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:32.055 filename0: (groupid=0, jobs=1): err= 0: pid=83678: Mon Jul 15 17:10:22 2024 00:21:32.055 read: IOPS=227, BW=28.4MiB/s 
(29.8MB/s)(284MiB/10008msec) 00:21:32.055 slat (nsec): min=7946, max=57531, avg=11981.44, stdev=5334.44 00:21:32.055 clat (usec): min=9856, max=16552, avg=13171.61, stdev=197.48 00:21:32.055 lat (usec): min=9864, max=16589, avg=13183.59, stdev=197.71 00:21:32.055 clat percentiles (usec): 00:21:32.055 | 1.00th=[12911], 5.00th=[13042], 10.00th=[13042], 20.00th=[13173], 00:21:32.055 | 30.00th=[13173], 40.00th=[13173], 50.00th=[13173], 60.00th=[13173], 00:21:32.055 | 70.00th=[13173], 80.00th=[13173], 90.00th=[13304], 95.00th=[13304], 00:21:32.055 | 99.00th=[13566], 99.50th=[13698], 99.90th=[16581], 99.95th=[16581], 00:21:32.055 | 99.99th=[16581] 00:21:32.055 bw ( KiB/s): min=28416, max=29184, per=33.33%, avg=29068.80, stdev=281.35, samples=20 00:21:32.055 iops : min= 222, max= 228, avg=227.10, stdev= 2.20, samples=20 00:21:32.055 lat (msec) : 10=0.13%, 20=99.87% 00:21:32.055 cpu : usr=90.78%, sys=8.58%, ctx=7, majf=0, minf=9 00:21:32.055 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:32.055 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:32.055 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:32.055 issued rwts: total=2274,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:32.055 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:32.055 00:21:32.055 Run status group 0 (all jobs): 00:21:32.055 READ: bw=85.2MiB/s (89.3MB/s), 28.4MiB/s-28.4MiB/s (29.8MB/s-29.8MB/s), io=853MiB (894MB), run=10008-10011msec 00:21:32.324 17:10:22 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:21:32.324 17:10:22 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:21:32.324 17:10:22 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:21:32.324 17:10:22 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:21:32.324 17:10:22 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:21:32.324 17:10:22 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:32.324 17:10:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:32.324 17:10:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:32.324 17:10:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:32.324 17:10:22 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:21:32.324 17:10:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:32.324 17:10:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:32.324 ************************************ 00:21:32.324 END TEST fio_dif_digest 00:21:32.324 ************************************ 00:21:32.324 17:10:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:32.324 00:21:32.324 real 0m10.975s 00:21:32.324 user 0m27.938s 00:21:32.324 sys 0m2.734s 00:21:32.324 17:10:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:32.324 17:10:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:32.324 17:10:22 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:21:32.324 17:10:22 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:21:32.324 17:10:22 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:21:32.324 17:10:22 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:32.324 17:10:22 nvmf_dif -- nvmf/common.sh@117 -- # sync 
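[Annotation — not part of the captured log] Teardown mirrors setup: destroy_subsystem first deletes the NVMe-oF subsystem and then its backing null bdev, after which nvmftestfini unloads the kernel NVMe/TCP modules seen in the rmmod lines that follow. Assuming the same rpc.py wrapper as in the earlier annotation, the per-subsystem cleanup reduces to:

# Sketch of the cleanup performed by destroy_subsystem 0 above
./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
./scripts/rpc.py bdev_null_delete bdev_null0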
00:21:32.324 17:10:22 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:32.324 17:10:22 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:21:32.324 17:10:22 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:32.324 17:10:22 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:32.324 rmmod nvme_tcp 00:21:32.324 rmmod nvme_fabrics 00:21:32.324 rmmod nvme_keyring 00:21:32.583 17:10:22 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:32.583 17:10:22 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:21:32.583 17:10:22 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:21:32.583 17:10:22 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 82929 ']' 00:21:32.583 17:10:22 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 82929 00:21:32.583 17:10:22 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 82929 ']' 00:21:32.583 17:10:22 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 82929 00:21:32.583 17:10:22 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:21:32.583 17:10:22 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:32.583 17:10:22 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82929 00:21:32.583 killing process with pid 82929 00:21:32.583 17:10:22 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:32.583 17:10:22 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:32.583 17:10:22 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82929' 00:21:32.583 17:10:22 nvmf_dif -- common/autotest_common.sh@967 -- # kill 82929 00:21:32.583 17:10:22 nvmf_dif -- common/autotest_common.sh@972 -- # wait 82929 00:21:32.863 17:10:22 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:21:32.863 17:10:22 nvmf_dif -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:33.125 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:33.125 Waiting for block devices as requested 00:21:33.125 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:21:33.125 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:21:33.384 17:10:23 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:33.384 17:10:23 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:33.384 17:10:23 nvmf_dif -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:33.384 17:10:23 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:33.384 17:10:23 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:33.384 17:10:23 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:33.384 17:10:23 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:33.384 17:10:23 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:33.384 00:21:33.384 real 0m59.619s 00:21:33.384 user 3m47.123s 00:21:33.384 sys 0m19.924s 00:21:33.384 17:10:23 nvmf_dif -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:33.384 17:10:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:21:33.384 ************************************ 00:21:33.384 END TEST nvmf_dif 00:21:33.384 ************************************ 00:21:33.384 17:10:23 -- common/autotest_common.sh@1142 -- # return 0 00:21:33.384 17:10:23 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:21:33.384 17:10:23 -- common/autotest_common.sh@1099 -- # '[' 2 
-le 1 ']' 00:21:33.384 17:10:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:33.384 17:10:23 -- common/autotest_common.sh@10 -- # set +x 00:21:33.384 ************************************ 00:21:33.384 START TEST nvmf_abort_qd_sizes 00:21:33.384 ************************************ 00:21:33.384 17:10:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:21:33.384 * Looking for test storage... 00:21:33.384 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:33.384 17:10:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:33.384 17:10:23 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:21:33.384 17:10:23 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:33.384 17:10:23 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:33.384 17:10:23 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:33.384 17:10:23 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:33.384 17:10:23 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:33.384 17:10:23 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:33.384 17:10:23 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:33.385 17:10:23 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:33.385 17:10:23 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:33.385 17:10:23 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:33.385 17:10:23 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da 00:21:33.385 17:10:23 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=0b4e8503-7bac-4879-926a-209303c4b3da 00:21:33.385 17:10:23 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:33.385 17:10:23 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:33.385 17:10:23 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:33.385 17:10:23 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:33.385 17:10:23 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:33.385 17:10:23 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:33.385 17:10:23 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:33.385 17:10:23 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:33.385 17:10:23 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.385 17:10:23 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.385 17:10:23 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.385 17:10:23 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:21:33.385 17:10:23 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.385 17:10:23 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:21:33.385 17:10:23 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:33.385 17:10:23 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:33.385 17:10:23 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:33.385 17:10:23 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:33.385 17:10:23 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:33.385 17:10:23 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:33.385 17:10:23 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:33.385 17:10:23 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:33.385 17:10:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:21:33.385 17:10:23 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:33.385 17:10:23 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:33.385 17:10:23 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:33.385 17:10:23 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:33.385 17:10:23 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:33.385 17:10:23 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:33.385 17:10:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:33.385 17:10:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:33.385 17:10:23 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:21:33.385 17:10:23 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:21:33.385 17:10:23 nvmf_abort_qd_sizes -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:21:33.385 17:10:23 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:21:33.385 17:10:23 nvmf_abort_qd_sizes -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:21:33.385 17:10:23 
nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # nvmf_veth_init 00:21:33.385 17:10:23 nvmf_abort_qd_sizes -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:33.385 17:10:23 nvmf_abort_qd_sizes -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:33.385 17:10:23 nvmf_abort_qd_sizes -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:33.385 17:10:23 nvmf_abort_qd_sizes -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:33.385 17:10:23 nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:33.385 17:10:23 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:33.385 17:10:23 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:33.385 17:10:23 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:33.385 17:10:23 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:33.385 17:10:23 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:33.385 17:10:23 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:33.385 17:10:23 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:33.385 17:10:23 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:33.385 17:10:23 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:33.385 Cannot find device "nvmf_tgt_br" 00:21:33.385 17:10:23 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # true 00:21:33.385 17:10:23 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:33.385 Cannot find device "nvmf_tgt_br2" 00:21:33.385 17:10:23 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # true 00:21:33.385 17:10:23 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:33.385 17:10:23 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:33.385 Cannot find device "nvmf_tgt_br" 00:21:33.385 17:10:23 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # true 00:21:33.385 17:10:23 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:33.385 Cannot find device "nvmf_tgt_br2" 00:21:33.385 17:10:23 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # true 00:21:33.385 17:10:23 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:33.643 17:10:23 nvmf_abort_qd_sizes -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:33.643 17:10:23 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:33.643 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:33.643 17:10:23 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:21:33.644 17:10:23 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:33.644 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:33.644 17:10:23 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:21:33.644 17:10:23 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:33.644 17:10:23 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:33.644 17:10:23 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:33.644 17:10:23 
nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:33.644 17:10:23 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:33.644 17:10:23 nvmf_abort_qd_sizes -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:33.644 17:10:23 nvmf_abort_qd_sizes -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:33.644 17:10:23 nvmf_abort_qd_sizes -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:33.644 17:10:23 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:33.644 17:10:23 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:33.644 17:10:23 nvmf_abort_qd_sizes -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:33.644 17:10:23 nvmf_abort_qd_sizes -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:33.644 17:10:23 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:33.644 17:10:23 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:33.644 17:10:23 nvmf_abort_qd_sizes -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:33.644 17:10:23 nvmf_abort_qd_sizes -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:33.644 17:10:23 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:33.644 17:10:23 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:33.644 17:10:23 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:33.644 17:10:23 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:33.644 17:10:23 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:33.644 17:10:23 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:33.644 17:10:23 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:33.644 17:10:23 nvmf_abort_qd_sizes -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:33.644 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:33.644 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.122 ms 00:21:33.644 00:21:33.644 --- 10.0.0.2 ping statistics --- 00:21:33.644 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:33.644 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:21:33.644 17:10:23 nvmf_abort_qd_sizes -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:33.644 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:33.644 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:21:33.644 00:21:33.644 --- 10.0.0.3 ping statistics --- 00:21:33.644 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:33.644 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:21:33.644 17:10:23 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:33.644 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:33.644 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:21:33.644 00:21:33.644 --- 10.0.0.1 ping statistics --- 00:21:33.644 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:33.644 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:21:33.644 17:10:23 nvmf_abort_qd_sizes -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:33.644 17:10:23 nvmf_abort_qd_sizes -- nvmf/common.sh@433 -- # return 0 00:21:33.644 17:10:23 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:21:33.644 17:10:23 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:34.577 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:34.577 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:21:34.577 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:21:34.577 17:10:24 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:34.577 17:10:24 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:34.577 17:10:24 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:34.577 17:10:24 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:34.577 17:10:24 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:34.577 17:10:24 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:34.577 17:10:24 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:21:34.577 17:10:24 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:34.577 17:10:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:34.577 17:10:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:34.577 17:10:24 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=84272 00:21:34.577 17:10:24 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:21:34.577 17:10:24 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 84272 00:21:34.577 17:10:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 84272 ']' 00:21:34.577 17:10:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:34.577 17:10:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:34.577 17:10:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:34.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:34.577 17:10:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:34.577 17:10:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:34.836 [2024-07-15 17:10:24.882309] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:21:34.836 [2024-07-15 17:10:24.882408] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:34.836 [2024-07-15 17:10:25.019497] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:35.095 [2024-07-15 17:10:25.166001] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:35.095 [2024-07-15 17:10:25.166092] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:35.095 [2024-07-15 17:10:25.166103] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:35.095 [2024-07-15 17:10:25.166112] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:35.095 [2024-07-15 17:10:25.166119] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:35.095 [2024-07-15 17:10:25.166270] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:35.095 [2024-07-15 17:10:25.166482] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:35.095 [2024-07-15 17:10:25.167040] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:35.095 [2024-07-15 17:10:25.167074] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:35.095 [2024-07-15 17:10:25.220097] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:21:35.707 17:10:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:35.707 17:10:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:21:35.707 17:10:25 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:35.707 17:10:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:35.707 17:10:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:35.707 17:10:25 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:35.707 17:10:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:21:35.707 17:10:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:21:35.707 17:10:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:21:35.707 17:10:25 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:21:35.707 17:10:25 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:21:35.707 17:10:25 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n '' ]] 00:21:35.707 17:10:25 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:21:35.707 17:10:25 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:21:35.707 17:10:25 nvmf_abort_qd_sizes -- scripts/common.sh@295 -- # local bdf= 00:21:35.707 17:10:25 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:21:35.707 17:10:25 nvmf_abort_qd_sizes -- scripts/common.sh@230 -- # local class 00:21:35.707 17:10:25 nvmf_abort_qd_sizes -- scripts/common.sh@231 -- # local subclass 00:21:35.707 17:10:25 nvmf_abort_qd_sizes -- scripts/common.sh@232 -- # local progif 00:21:35.707 17:10:25 
nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # printf %02x 1 00:21:35.707 17:10:25 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # class=01 00:21:35.707 17:10:25 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # printf %02x 8 00:21:35.707 17:10:25 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # subclass=08 00:21:35.707 17:10:25 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # printf %02x 2 00:21:35.707 17:10:25 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # progif=02 00:21:35.707 17:10:25 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # hash lspci 00:21:35.707 17:10:25 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:21:35.707 17:10:25 nvmf_abort_qd_sizes -- scripts/common.sh@239 -- # lspci -mm -n -D 00:21:35.707 17:10:25 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # grep -i -- -p02 00:21:35.707 17:10:25 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:21:35.707 17:10:25 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # tr -d '"' 00:21:35.707 17:10:25 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:21:35.707 17:10:25 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:21:35.707 17:10:25 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:21:35.707 17:10:25 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:21:35.707 17:10:25 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:21:35.707 17:10:25 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:21:35.707 17:10:25 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:21:35.707 17:10:25 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:21:35.707 17:10:25 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:21:35.707 17:10:25 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:21:35.707 17:10:25 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:21:35.707 17:10:25 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:21:35.707 17:10:25 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:21:35.707 17:10:25 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:21:35.707 17:10:25 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:21:35.707 17:10:25 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:21:35.707 17:10:25 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:21:35.707 17:10:25 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:21:35.707 17:10:25 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:21:35.707 17:10:25 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:21:35.707 17:10:25 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:21:35.707 17:10:25 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:21:35.707 17:10:25 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:21:35.707 17:10:25 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:21:35.707 17:10:25 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 2 )) 00:21:35.707 17:10:25 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:21:35.707 17:10:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 
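For context, the nvme_in_userspace helper traced above locates NVMe controllers by PCI class code 01/08/02 with lspci, applies an allow/block check (empty in this run), and hands back 0000:00:10.0 and 0000:00:11.0. Condensed into a single pipeline, the enumeration it performs is roughly:

  # condensed form of the lspci/awk scan xtraced above (same commands, joined into one pipeline)
  # prints the PCI addresses of all class 0108 prog-if 02 (NVMe) devices
  lspci -mm -n -D | grep -i -- -p02 | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'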
00:21:35.707 17:10:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:21:35.707 17:10:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:21:35.707 17:10:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:35.707 17:10:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:35.707 17:10:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:35.707 ************************************ 00:21:35.707 START TEST spdk_target_abort 00:21:35.707 ************************************ 00:21:35.707 17:10:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:21:35.707 17:10:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:21:35.707 17:10:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:21:35.707 17:10:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.707 17:10:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:35.707 spdk_targetn1 00:21:35.707 17:10:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.707 17:10:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:35.707 17:10:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.707 17:10:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:35.707 [2024-07-15 17:10:26.001515] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:35.966 17:10:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.966 17:10:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:21:35.966 17:10:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.966 17:10:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:35.966 17:10:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.966 17:10:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:21:35.966 17:10:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.966 17:10:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:35.966 17:10:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.966 17:10:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:21:35.966 17:10:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.966 17:10:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:35.966 [2024-07-15 17:10:26.029811] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:35.966 17:10:26 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.966 17:10:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:21:35.966 17:10:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:21:35.966 17:10:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:21:35.966 17:10:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:21:35.966 17:10:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:21:35.966 17:10:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:21:35.966 17:10:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:21:35.966 17:10:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:21:35.966 17:10:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:21:35.966 17:10:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:35.966 17:10:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:21:35.966 17:10:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:35.966 17:10:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:21:35.966 17:10:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:35.966 17:10:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:21:35.966 17:10:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:35.966 17:10:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:35.966 17:10:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:35.966 17:10:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:35.966 17:10:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:35.966 17:10:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:39.248 Initializing NVMe Controllers 00:21:39.248 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:21:39.248 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:39.248 Initialization complete. Launching workers. 
00:21:39.248 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 10638, failed: 0 00:21:39.248 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1022, failed to submit 9616 00:21:39.248 success 851, unsuccess 171, failed 0 00:21:39.248 17:10:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:39.248 17:10:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:42.530 Initializing NVMe Controllers 00:21:42.530 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:21:42.530 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:42.530 Initialization complete. Launching workers. 00:21:42.530 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8897, failed: 0 00:21:42.530 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1173, failed to submit 7724 00:21:42.530 success 394, unsuccess 779, failed 0 00:21:42.530 17:10:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:42.530 17:10:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:45.812 Initializing NVMe Controllers 00:21:45.812 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:21:45.812 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:45.812 Initialization complete. Launching workers. 
00:21:45.812 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 30323, failed: 0 00:21:45.812 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2304, failed to submit 28019 00:21:45.812 success 409, unsuccess 1895, failed 0 00:21:45.812 17:10:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:21:45.812 17:10:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.812 17:10:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:45.812 17:10:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.812 17:10:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:21:45.812 17:10:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.812 17:10:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:46.072 17:10:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.072 17:10:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 84272 00:21:46.072 17:10:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 84272 ']' 00:21:46.072 17:10:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 84272 00:21:46.072 17:10:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:21:46.333 17:10:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:46.333 17:10:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84272 00:21:46.333 17:10:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:46.333 17:10:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:46.333 killing process with pid 84272 00:21:46.333 17:10:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84272' 00:21:46.333 17:10:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 84272 00:21:46.333 17:10:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 84272 00:21:46.592 00:21:46.592 real 0m10.856s 00:21:46.592 user 0m42.265s 00:21:46.592 sys 0m2.131s 00:21:46.592 ************************************ 00:21:46.592 END TEST spdk_target_abort 00:21:46.592 ************************************ 00:21:46.592 17:10:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:46.592 17:10:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:46.592 17:10:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:21:46.592 17:10:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:21:46.592 17:10:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:46.592 17:10:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:46.592 17:10:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:46.592 
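For context, the spdk_target_abort test that just finished drives everything through the rpc_cmd helper, a thin wrapper over scripts/rpc.py in the autotest harness. Replayed by hand against a running nvmf_tgt, the same setup plus one abort pass would look roughly like the sketch below; the addresses, NQN, and the 0000:00:10.0 controller are the ones selected above.

  # sketch of the spdk_target_abort sequence using scripts/rpc.py directly
  scripts/rpc.py bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420
  # abort example at queue depth 4; the test repeats this with -q 24 and -q 64
  /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
  # teardown, as in the test: nvmf_delete_subsystem, then bdev_nvme_detach_controller spdk_target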
************************************ 00:21:46.592 START TEST kernel_target_abort 00:21:46.592 ************************************ 00:21:46.592 17:10:36 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:21:46.592 17:10:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:21:46.592 17:10:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:21:46.592 17:10:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:21:46.592 17:10:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:21:46.592 17:10:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:46.592 17:10:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:46.592 17:10:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:21:46.592 17:10:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:46.592 17:10:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:21:46.592 17:10:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:21:46.592 17:10:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:21:46.592 17:10:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:21:46.592 17:10:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:21:46.592 17:10:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:21:46.592 17:10:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:46.592 17:10:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:46.592 17:10:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:21:46.592 17:10:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:21:46.592 17:10:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:21:46.592 17:10:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:21:46.592 17:10:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:21:46.592 17:10:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:46.904 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:46.904 Waiting for block devices as requested 00:21:47.163 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:21:47.163 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:21:47.163 17:10:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:21:47.163 17:10:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:21:47.163 17:10:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:21:47.163 17:10:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:21:47.163 17:10:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:21:47.163 17:10:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:21:47.163 17:10:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:21:47.163 17:10:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:21:47.163 17:10:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:21:47.163 No valid GPT data, bailing 00:21:47.422 17:10:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:21:47.422 17:10:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:21:47.422 17:10:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:21:47.422 17:10:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:21:47.422 17:10:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:21:47.422 17:10:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:21:47.422 17:10:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:21:47.422 17:10:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:21:47.422 17:10:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:21:47.422 17:10:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:21:47.422 17:10:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:21:47.422 17:10:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:21:47.422 17:10:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:21:47.422 No valid GPT data, bailing 00:21:47.422 17:10:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
00:21:47.422 17:10:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:21:47.422 17:10:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:21:47.422 17:10:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:21:47.422 17:10:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:21:47.422 17:10:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:21:47.422 17:10:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:21:47.422 17:10:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:21:47.422 17:10:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:21:47.422 17:10:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:21:47.422 17:10:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:21:47.422 17:10:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:21:47.422 17:10:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:21:47.422 No valid GPT data, bailing 00:21:47.422 17:10:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:21:47.422 17:10:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:21:47.422 17:10:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:21:47.422 17:10:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:21:47.422 17:10:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:21:47.422 17:10:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:21:47.422 17:10:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:21:47.422 17:10:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:21:47.422 17:10:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:21:47.422 17:10:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:21:47.422 17:10:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:21:47.422 17:10:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:21:47.422 17:10:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:21:47.422 No valid GPT data, bailing 00:21:47.422 17:10:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:21:47.422 17:10:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:21:47.422 17:10:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:21:47.422 17:10:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:21:47.422 17:10:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ 
-b /dev/nvme1n1 ]] 00:21:47.423 17:10:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:47.423 17:10:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:47.423 17:10:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:21:47.423 17:10:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:21:47.423 17:10:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:21:47.423 17:10:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:21:47.423 17:10:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:21:47.423 17:10:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:21:47.423 17:10:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:21:47.423 17:10:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:21:47.423 17:10:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:21:47.423 17:10:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:21:47.423 17:10:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da --hostid=0b4e8503-7bac-4879-926a-209303c4b3da -a 10.0.0.1 -t tcp -s 4420 00:21:47.682 00:21:47.682 Discovery Log Number of Records 2, Generation counter 2 00:21:47.682 =====Discovery Log Entry 0====== 00:21:47.682 trtype: tcp 00:21:47.682 adrfam: ipv4 00:21:47.682 subtype: current discovery subsystem 00:21:47.682 treq: not specified, sq flow control disable supported 00:21:47.682 portid: 1 00:21:47.682 trsvcid: 4420 00:21:47.682 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:21:47.682 traddr: 10.0.0.1 00:21:47.682 eflags: none 00:21:47.682 sectype: none 00:21:47.682 =====Discovery Log Entry 1====== 00:21:47.682 trtype: tcp 00:21:47.682 adrfam: ipv4 00:21:47.682 subtype: nvme subsystem 00:21:47.682 treq: not specified, sq flow control disable supported 00:21:47.682 portid: 1 00:21:47.682 trsvcid: 4420 00:21:47.682 subnqn: nqn.2016-06.io.spdk:testnqn 00:21:47.682 traddr: 10.0.0.1 00:21:47.682 eflags: none 00:21:47.682 sectype: none 00:21:47.682 17:10:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:21:47.682 17:10:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:21:47.682 17:10:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:21:47.682 17:10:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:21:47.682 17:10:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:21:47.682 17:10:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:21:47.682 17:10:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:21:47.682 17:10:37 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:21:47.682 17:10:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:21:47.682 17:10:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:47.682 17:10:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:21:47.682 17:10:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:47.682 17:10:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:21:47.682 17:10:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:47.682 17:10:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:21:47.682 17:10:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:47.682 17:10:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:21:47.682 17:10:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:47.682 17:10:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:47.682 17:10:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:47.682 17:10:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:50.968 Initializing NVMe Controllers 00:21:50.968 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:21:50.968 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:50.968 Initialization complete. Launching workers. 00:21:50.968 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 34422, failed: 0 00:21:50.968 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 34422, failed to submit 0 00:21:50.968 success 0, unsuccess 34422, failed 0 00:21:50.968 17:10:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:50.968 17:10:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:54.255 Initializing NVMe Controllers 00:21:54.255 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:21:54.255 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:54.255 Initialization complete. Launching workers. 
00:21:54.255 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 71998, failed: 0 00:21:54.255 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 31233, failed to submit 40765 00:21:54.255 success 0, unsuccess 31233, failed 0 00:21:54.255 17:10:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:54.255 17:10:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:57.604 Initializing NVMe Controllers 00:21:57.604 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:21:57.604 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:57.604 Initialization complete. Launching workers. 00:21:57.604 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 85456, failed: 0 00:21:57.604 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 21342, failed to submit 64114 00:21:57.604 success 0, unsuccess 21342, failed 0 00:21:57.604 17:10:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:21:57.604 17:10:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:21:57.604 17:10:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:21:57.604 17:10:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:57.604 17:10:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:57.604 17:10:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:21:57.604 17:10:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:57.604 17:10:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:21:57.604 17:10:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:21:57.604 17:10:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:57.863 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:00.467 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:22:00.467 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:22:00.467 00:22:00.467 real 0m13.448s 00:22:00.467 user 0m6.143s 00:22:00.467 sys 0m4.574s 00:22:00.467 17:10:50 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:00.467 17:10:50 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:22:00.467 ************************************ 00:22:00.467 END TEST kernel_target_abort 00:22:00.467 ************************************ 00:22:00.467 17:10:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:22:00.467 17:10:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:22:00.467 
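For context, configure_kernel_target, exercised by the kernel_target_abort test above, builds the target through the kernel nvmet configfs tree. The xtrace does not show redirection targets, so the attribute file names in this condensed sketch are the standard nvmet ones and are inferred rather than copied from the log; the device path and addresses are the ones used above.

  # condensed configfs sequence behind configure_kernel_target (attribute names inferred)
  modprobe nvmet                  # loaded by the helper; nvmet_tcp must also be available for a tcp port
  cfg=/sys/kernel/config/nvmet
  sub=$cfg/subsystems/nqn.2016-06.io.spdk:testnqn
  mkdir $sub
  mkdir $sub/namespaces/1
  mkdir $cfg/ports/1
  echo 1            > $sub/attr_allow_any_host
  echo /dev/nvme1n1 > $sub/namespaces/1/device_path
  echo 1            > $sub/namespaces/1/enable
  echo 10.0.0.1     > $cfg/ports/1/addr_traddr
  echo tcp          > $cfg/ports/1/addr_trtype
  echo 4420         > $cfg/ports/1/addr_trsvcid
  echo ipv4         > $cfg/ports/1/addr_adrfam
  ln -s $sub $cfg/ports/1/subsystems/
  # verify the listener the same way the test does
  nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da \
      --hostid=0b4e8503-7bac-4879-926a-209303c4b3da -a 10.0.0.1 -t tcp -s 4420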
17:10:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:22:00.467 17:10:50 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:00.467 17:10:50 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:22:00.467 17:10:50 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:00.467 17:10:50 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:22:00.467 17:10:50 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:00.467 17:10:50 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:00.467 rmmod nvme_tcp 00:22:00.467 rmmod nvme_fabrics 00:22:00.467 rmmod nvme_keyring 00:22:00.467 17:10:50 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:00.467 17:10:50 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:22:00.467 17:10:50 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:22:00.467 17:10:50 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 84272 ']' 00:22:00.467 17:10:50 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 84272 00:22:00.467 17:10:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 84272 ']' 00:22:00.467 17:10:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 84272 00:22:00.467 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (84272) - No such process 00:22:00.467 Process with pid 84272 is not found 00:22:00.467 17:10:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 84272 is not found' 00:22:00.467 17:10:50 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:22:00.467 17:10:50 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:22:00.726 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:00.726 Waiting for block devices as requested 00:22:00.726 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:22:00.726 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:22:00.985 17:10:51 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:00.985 17:10:51 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:00.985 17:10:51 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:00.985 17:10:51 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:00.985 17:10:51 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:00.985 17:10:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:22:00.985 17:10:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:00.985 17:10:51 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:22:00.985 00:22:00.985 real 0m27.587s 00:22:00.985 user 0m49.559s 00:22:00.985 sys 0m8.001s 00:22:00.985 17:10:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:00.985 ************************************ 00:22:00.985 END TEST nvmf_abort_qd_sizes 00:22:00.985 ************************************ 00:22:00.985 17:10:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:22:00.985 17:10:51 -- common/autotest_common.sh@1142 -- # return 0 00:22:00.985 17:10:51 -- spdk/autotest.sh@295 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:22:00.985 17:10:51 -- common/autotest_common.sh@1099 -- # '[' 2 
-le 1 ']' 00:22:00.985 17:10:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:00.985 17:10:51 -- common/autotest_common.sh@10 -- # set +x 00:22:00.985 ************************************ 00:22:00.985 START TEST keyring_file 00:22:00.985 ************************************ 00:22:00.985 17:10:51 keyring_file -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:22:00.985 * Looking for test storage... 00:22:00.985 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:22:00.985 17:10:51 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:22:00.985 17:10:51 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:00.985 17:10:51 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:22:00.985 17:10:51 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:00.985 17:10:51 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:00.985 17:10:51 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:00.985 17:10:51 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:00.985 17:10:51 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:00.985 17:10:51 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:00.985 17:10:51 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:00.985 17:10:51 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:00.985 17:10:51 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:00.985 17:10:51 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:00.985 17:10:51 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da 00:22:00.985 17:10:51 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=0b4e8503-7bac-4879-926a-209303c4b3da 00:22:00.985 17:10:51 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:00.985 17:10:51 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:00.985 17:10:51 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:00.985 17:10:51 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:00.985 17:10:51 keyring_file -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:00.985 17:10:51 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:00.985 17:10:51 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:00.986 17:10:51 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:00.986 17:10:51 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.986 17:10:51 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.986 17:10:51 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.986 17:10:51 keyring_file -- paths/export.sh@5 -- # export PATH 00:22:00.986 17:10:51 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.986 17:10:51 keyring_file -- nvmf/common.sh@47 -- # : 0 00:22:00.986 17:10:51 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:00.986 17:10:51 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:00.986 17:10:51 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:00.986 17:10:51 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:00.986 17:10:51 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:00.986 17:10:51 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:00.986 17:10:51 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:00.986 17:10:51 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:00.986 17:10:51 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:22:00.986 17:10:51 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:22:00.986 17:10:51 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:22:00.986 17:10:51 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:22:00.986 17:10:51 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:22:00.986 17:10:51 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:22:00.986 17:10:51 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:22:00.986 17:10:51 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:22:00.986 17:10:51 keyring_file -- keyring/common.sh@17 -- # name=key0 00:22:00.986 17:10:51 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:22:00.986 17:10:51 keyring_file -- keyring/common.sh@17 -- # digest=0 00:22:00.986 17:10:51 keyring_file -- keyring/common.sh@18 -- # mktemp 00:22:00.986 17:10:51 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.ETQLTxpuW0 00:22:00.986 17:10:51 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:22:00.986 17:10:51 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 
00112233445566778899aabbccddeeff 0 00:22:00.986 17:10:51 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:22:00.986 17:10:51 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:00.986 17:10:51 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:22:00.986 17:10:51 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:22:00.986 17:10:51 keyring_file -- nvmf/common.sh@705 -- # python - 00:22:01.246 17:10:51 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.ETQLTxpuW0 00:22:01.246 17:10:51 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.ETQLTxpuW0 00:22:01.246 17:10:51 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.ETQLTxpuW0 00:22:01.246 17:10:51 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:22:01.246 17:10:51 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:22:01.246 17:10:51 keyring_file -- keyring/common.sh@17 -- # name=key1 00:22:01.246 17:10:51 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:22:01.246 17:10:51 keyring_file -- keyring/common.sh@17 -- # digest=0 00:22:01.246 17:10:51 keyring_file -- keyring/common.sh@18 -- # mktemp 00:22:01.246 17:10:51 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.s1jRSHpDaT 00:22:01.246 17:10:51 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:22:01.246 17:10:51 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:22:01.246 17:10:51 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:22:01.246 17:10:51 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:01.246 17:10:51 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:22:01.246 17:10:51 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:22:01.246 17:10:51 keyring_file -- nvmf/common.sh@705 -- # python - 00:22:01.246 17:10:51 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.s1jRSHpDaT 00:22:01.246 17:10:51 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.s1jRSHpDaT 00:22:01.246 17:10:51 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.s1jRSHpDaT 00:22:01.246 17:10:51 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:01.246 17:10:51 keyring_file -- keyring/file.sh@30 -- # tgtpid=85139 00:22:01.246 17:10:51 keyring_file -- keyring/file.sh@32 -- # waitforlisten 85139 00:22:01.246 17:10:51 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 85139 ']' 00:22:01.246 17:10:51 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:01.246 17:10:51 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:01.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:01.246 17:10:51 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:01.246 17:10:51 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:01.246 17:10:51 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:22:01.246 [2024-07-15 17:10:51.392704] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:22:01.246 [2024-07-15 17:10:51.392785] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85139 ] 00:22:01.246 [2024-07-15 17:10:51.528591] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:01.505 [2024-07-15 17:10:51.652601] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:01.505 [2024-07-15 17:10:51.710291] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:22:02.073 17:10:52 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:02.073 17:10:52 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:22:02.073 17:10:52 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:22:02.073 17:10:52 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.073 17:10:52 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:22:02.073 [2024-07-15 17:10:52.355519] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:02.332 null0 00:22:02.332 [2024-07-15 17:10:52.387476] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:02.332 [2024-07-15 17:10:52.387719] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:22:02.332 [2024-07-15 17:10:52.395474] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:02.332 17:10:52 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.332 17:10:52 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:22:02.332 17:10:52 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:22:02.332 17:10:52 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:22:02.332 17:10:52 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:22:02.332 17:10:52 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:02.332 17:10:52 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:22:02.332 17:10:52 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:02.332 17:10:52 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:22:02.332 17:10:52 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.332 17:10:52 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:22:02.332 [2024-07-15 17:10:52.407481] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:22:02.332 request: 00:22:02.332 { 00:22:02.332 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:22:02.332 "secure_channel": false, 00:22:02.332 "listen_address": { 00:22:02.332 "trtype": "tcp", 00:22:02.332 "traddr": "127.0.0.1", 00:22:02.332 "trsvcid": "4420" 00:22:02.332 }, 00:22:02.332 "method": "nvmf_subsystem_add_listener", 00:22:02.332 "req_id": 1 00:22:02.332 } 00:22:02.332 Got JSON-RPC error response 00:22:02.332 response: 00:22:02.332 { 00:22:02.332 "code": -32602, 00:22:02.332 "message": "Invalid parameters" 00:22:02.332 } 00:22:02.332 17:10:52 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 
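Everything up to this point is the keyring_file prologue: two PSK files are created with mktemp, filled with an NVMeTLSkey-1 interchange key by format_interchange_psk, chmod'ed to 0600, and spdk_tgt is brought up and given a TCP listener on 127.0.0.1:4420. The NOT-wrapped rpc_cmd just above asserts that adding that listener a second time is rejected. A rough hand-run equivalent of that check, assuming the target from this run is still answering on the default /var/tmp/spdk.sock (NQN and address are copied from the log):

  SPDK_DIR=/home/vagrant/spdk_repo/spdk
  # The subsystem already listens on 127.0.0.1:4420, so this call should fail
  # with JSON-RPC -32602 and "Listener already exists" in the target log.
  if ! "$SPDK_DIR/scripts/rpc.py" nvmf_subsystem_add_listener \
          nqn.2016-06.io.spdk:cnode0 -t tcp -a 127.0.0.1 -s 4420; then
      echo "duplicate listener rejected, as the test expects"
  fi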
00:22:02.332 17:10:52 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:22:02.332 17:10:52 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:02.332 17:10:52 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:02.332 17:10:52 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:02.332 17:10:52 keyring_file -- keyring/file.sh@46 -- # bperfpid=85156 00:22:02.332 17:10:52 keyring_file -- keyring/file.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:22:02.332 17:10:52 keyring_file -- keyring/file.sh@48 -- # waitforlisten 85156 /var/tmp/bperf.sock 00:22:02.332 17:10:52 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 85156 ']' 00:22:02.332 17:10:52 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:02.332 17:10:52 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:02.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:02.332 17:10:52 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:02.332 17:10:52 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:02.332 17:10:52 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:22:02.332 [2024-07-15 17:10:52.460168] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:22:02.332 [2024-07-15 17:10:52.460268] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85156 ] 00:22:02.332 [2024-07-15 17:10:52.606377] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:02.590 [2024-07-15 17:10:52.712453] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:02.590 [2024-07-15 17:10:52.765470] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:22:03.156 17:10:53 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:03.156 17:10:53 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:22:03.156 17:10:53 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.ETQLTxpuW0 00:22:03.156 17:10:53 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ETQLTxpuW0 00:22:03.414 17:10:53 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.s1jRSHpDaT 00:22:03.414 17:10:53 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.s1jRSHpDaT 00:22:03.671 17:10:53 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:22:03.671 17:10:53 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:22:03.671 17:10:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:03.671 17:10:53 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:03.671 17:10:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:03.930 17:10:54 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.ETQLTxpuW0 == 
\/\t\m\p\/\t\m\p\.\E\T\Q\L\T\x\p\u\W\0 ]] 00:22:03.930 17:10:54 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:22:03.930 17:10:54 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:22:03.930 17:10:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:03.930 17:10:54 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:03.930 17:10:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:22:04.189 17:10:54 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.s1jRSHpDaT == \/\t\m\p\/\t\m\p\.\s\1\j\R\S\H\p\D\a\T ]] 00:22:04.189 17:10:54 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:22:04.189 17:10:54 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:22:04.189 17:10:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:04.189 17:10:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:04.189 17:10:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:04.189 17:10:54 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:04.448 17:10:54 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:22:04.448 17:10:54 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:22:04.448 17:10:54 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:22:04.448 17:10:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:04.448 17:10:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:04.448 17:10:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:22:04.448 17:10:54 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:04.707 17:10:54 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:22:04.707 17:10:54 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:04.707 17:10:54 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:05.008 [2024-07-15 17:10:55.151318] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:05.008 nvme0n1 00:22:05.008 17:10:55 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:22:05.008 17:10:55 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:22:05.008 17:10:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:05.008 17:10:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:05.008 17:10:55 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:05.008 17:10:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:05.266 17:10:55 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:22:05.266 17:10:55 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:22:05.266 17:10:55 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:22:05.266 17:10:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:05.266 17:10:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd 
keyring_get_keys 00:22:05.266 17:10:55 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:05.266 17:10:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:22:05.525 17:10:55 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:22:05.525 17:10:55 keyring_file -- keyring/file.sh@62 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:05.784 Running I/O for 1 seconds... 00:22:06.719 00:22:06.719 Latency(us) 00:22:06.719 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:06.719 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:22:06.719 nvme0n1 : 1.01 11484.31 44.86 0.00 0.00 11107.93 5630.14 23712.12 00:22:06.719 =================================================================================================================== 00:22:06.719 Total : 11484.31 44.86 0.00 0.00 11107.93 5630.14 23712.12 00:22:06.719 0 00:22:06.719 17:10:56 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:22:06.719 17:10:56 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:22:06.977 17:10:57 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:22:06.977 17:10:57 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:22:06.977 17:10:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:06.977 17:10:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:06.977 17:10:57 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:06.977 17:10:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:07.235 17:10:57 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:22:07.235 17:10:57 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:22:07.235 17:10:57 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:22:07.235 17:10:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:07.235 17:10:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:22:07.235 17:10:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:07.235 17:10:57 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:07.494 17:10:57 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:22:07.494 17:10:57 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:22:07.494 17:10:57 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:22:07.494 17:10:57 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:22:07.494 17:10:57 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:22:07.494 17:10:57 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:07.494 17:10:57 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:22:07.494 17:10:57 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
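Before the negative checks that follow, the happy path above registered both key files with the bdevperf instance, verified them with keyring_get_keys, attached nvme0 over TCP with --psk key0, ran one second of random-rw I/O through it, and detached again. Condensed into the commands a person would type against the same RPC socket (the key path is one of this run's temp files, so treat it as illustrative):

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
  key0=/tmp/tmp.ETQLTxpuW0   # PSK file prepared earlier in this log
  $rpc keyring_file_add_key key0 "$key0"
  # Check that the key is registered and inspect its reference count.
  $rpc keyring_get_keys | jq '.[] | select(.name == "key0")'
  # Attach an NVMe-oF/TCP controller that authenticates with that key.
  $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0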
00:22:07.494 17:10:57 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:22:07.494 17:10:57 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:22:07.752 [2024-07-15 17:10:57.963093] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:07.752 [2024-07-15 17:10:57.963799] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xae4590 (107): Transport endpoint is not connected 00:22:07.752 [2024-07-15 17:10:57.964788] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xae4590 (9): Bad file descriptor 00:22:07.752 [2024-07-15 17:10:57.965784] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:07.752 [2024-07-15 17:10:57.965806] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:22:07.752 [2024-07-15 17:10:57.965831] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:07.752 request: 00:22:07.752 { 00:22:07.752 "name": "nvme0", 00:22:07.752 "trtype": "tcp", 00:22:07.752 "traddr": "127.0.0.1", 00:22:07.752 "adrfam": "ipv4", 00:22:07.752 "trsvcid": "4420", 00:22:07.752 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:07.752 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:07.752 "prchk_reftag": false, 00:22:07.752 "prchk_guard": false, 00:22:07.752 "hdgst": false, 00:22:07.752 "ddgst": false, 00:22:07.752 "psk": "key1", 00:22:07.752 "method": "bdev_nvme_attach_controller", 00:22:07.752 "req_id": 1 00:22:07.752 } 00:22:07.753 Got JSON-RPC error response 00:22:07.753 response: 00:22:07.753 { 00:22:07.753 "code": -5, 00:22:07.753 "message": "Input/output error" 00:22:07.753 } 00:22:07.753 17:10:57 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:22:07.753 17:10:57 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:07.753 17:10:57 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:07.753 17:10:57 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:07.753 17:10:57 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:22:07.753 17:10:57 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:22:07.753 17:10:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:07.753 17:10:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:07.753 17:10:57 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:07.753 17:10:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:08.046 17:10:58 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:22:08.046 17:10:58 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:22:08.046 17:10:58 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:22:08.046 17:10:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:08.046 17:10:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:08.046 17:10:58 keyring_file -- keyring/common.sh@8 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:08.046 17:10:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:22:08.303 17:10:58 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:22:08.303 17:10:58 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:22:08.303 17:10:58 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:22:08.563 17:10:58 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:22:08.563 17:10:58 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:22:08.822 17:10:58 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:22:08.822 17:10:58 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:08.822 17:10:58 keyring_file -- keyring/file.sh@77 -- # jq length 00:22:09.081 17:10:59 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:22:09.081 17:10:59 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.ETQLTxpuW0 00:22:09.081 17:10:59 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.ETQLTxpuW0 00:22:09.081 17:10:59 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:22:09.081 17:10:59 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.ETQLTxpuW0 00:22:09.081 17:10:59 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:22:09.081 17:10:59 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:09.081 17:10:59 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:22:09.081 17:10:59 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:09.081 17:10:59 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.ETQLTxpuW0 00:22:09.081 17:10:59 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ETQLTxpuW0 00:22:09.339 [2024-07-15 17:10:59.431157] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.ETQLTxpuW0': 0100660 00:22:09.339 [2024-07-15 17:10:59.431222] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:22:09.339 request: 00:22:09.339 { 00:22:09.339 "name": "key0", 00:22:09.339 "path": "/tmp/tmp.ETQLTxpuW0", 00:22:09.339 "method": "keyring_file_add_key", 00:22:09.339 "req_id": 1 00:22:09.339 } 00:22:09.339 Got JSON-RPC error response 00:22:09.339 response: 00:22:09.339 { 00:22:09.339 "code": -1, 00:22:09.339 "message": "Operation not permitted" 00:22:09.339 } 00:22:09.339 17:10:59 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:22:09.339 17:10:59 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:09.339 17:10:59 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:09.339 17:10:59 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:09.339 17:10:59 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.ETQLTxpuW0 00:22:09.339 17:10:59 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.ETQLTxpuW0 00:22:09.339 17:10:59 keyring_file -- keyring/common.sh@8 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ETQLTxpuW0 00:22:09.599 17:10:59 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.ETQLTxpuW0 00:22:09.599 17:10:59 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:22:09.599 17:10:59 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:22:09.599 17:10:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:09.599 17:10:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:09.599 17:10:59 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:09.599 17:10:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:09.857 17:10:59 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:22:09.857 17:10:59 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:09.857 17:10:59 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:22:09.857 17:10:59 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:09.857 17:10:59 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:22:09.857 17:10:59 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:09.857 17:10:59 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:22:09.857 17:10:59 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:09.857 17:10:59 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:09.857 17:10:59 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:09.857 [2024-07-15 17:11:00.139409] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.ETQLTxpuW0': No such file or directory 00:22:09.857 [2024-07-15 17:11:00.139461] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:22:09.858 [2024-07-15 17:11:00.139488] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:22:09.858 [2024-07-15 17:11:00.139506] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:22:09.858 [2024-07-15 17:11:00.139520] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:22:09.858 request: 00:22:09.858 { 00:22:09.858 "name": "nvme0", 00:22:09.858 "trtype": "tcp", 00:22:09.858 "traddr": "127.0.0.1", 00:22:09.858 "adrfam": "ipv4", 00:22:09.858 "trsvcid": "4420", 00:22:09.858 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:09.858 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:09.858 "prchk_reftag": false, 00:22:09.858 "prchk_guard": false, 00:22:09.858 "hdgst": false, 00:22:09.858 "ddgst": false, 00:22:09.858 "psk": "key0", 00:22:09.858 "method": "bdev_nvme_attach_controller", 00:22:09.858 "req_id": 1 00:22:09.858 } 00:22:09.858 
Got JSON-RPC error response 00:22:09.858 response: 00:22:09.858 { 00:22:09.858 "code": -19, 00:22:09.858 "message": "No such device" 00:22:09.858 } 00:22:10.117 17:11:00 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:22:10.117 17:11:00 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:10.117 17:11:00 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:10.117 17:11:00 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:10.117 17:11:00 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:22:10.117 17:11:00 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:22:10.117 17:11:00 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:22:10.117 17:11:00 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:22:10.117 17:11:00 keyring_file -- keyring/common.sh@17 -- # name=key0 00:22:10.117 17:11:00 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:22:10.117 17:11:00 keyring_file -- keyring/common.sh@17 -- # digest=0 00:22:10.117 17:11:00 keyring_file -- keyring/common.sh@18 -- # mktemp 00:22:10.117 17:11:00 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.M071EIy4YK 00:22:10.117 17:11:00 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:22:10.117 17:11:00 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:22:10.117 17:11:00 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:22:10.117 17:11:00 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:10.117 17:11:00 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:22:10.117 17:11:00 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:22:10.117 17:11:00 keyring_file -- nvmf/common.sh@705 -- # python - 00:22:10.376 17:11:00 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.M071EIy4YK 00:22:10.376 17:11:00 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.M071EIy4YK 00:22:10.376 17:11:00 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.M071EIy4YK 00:22:10.376 17:11:00 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.M071EIy4YK 00:22:10.376 17:11:00 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.M071EIy4YK 00:22:10.376 17:11:00 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:10.376 17:11:00 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:10.943 nvme0n1 00:22:10.943 17:11:01 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:22:10.943 17:11:01 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:22:10.943 17:11:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:10.943 17:11:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:10.943 17:11:01 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:22:10.943 17:11:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:11.201 17:11:01 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:22:11.201 17:11:01 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:22:11.201 17:11:01 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:22:11.201 17:11:01 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:22:11.201 17:11:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:11.201 17:11:01 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:22:11.201 17:11:01 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:11.201 17:11:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:11.472 17:11:01 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:22:11.472 17:11:01 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:22:11.472 17:11:01 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:22:11.472 17:11:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:11.472 17:11:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:11.472 17:11:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:11.472 17:11:01 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:11.747 17:11:01 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:22:11.747 17:11:01 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:22:11.747 17:11:01 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:22:12.043 17:11:02 keyring_file -- keyring/file.sh@104 -- # jq length 00:22:12.043 17:11:02 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:22:12.043 17:11:02 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:12.300 17:11:02 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:22:12.300 17:11:02 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.M071EIy4YK 00:22:12.300 17:11:02 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.M071EIy4YK 00:22:12.559 17:11:02 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.s1jRSHpDaT 00:22:12.559 17:11:02 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.s1jRSHpDaT 00:22:12.818 17:11:02 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:12.818 17:11:02 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:13.076 nvme0n1 00:22:13.076 17:11:03 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:22:13.076 17:11:03 keyring_file 
-- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:22:13.336 17:11:03 keyring_file -- keyring/file.sh@112 -- # config='{ 00:22:13.336 "subsystems": [ 00:22:13.336 { 00:22:13.336 "subsystem": "keyring", 00:22:13.336 "config": [ 00:22:13.336 { 00:22:13.336 "method": "keyring_file_add_key", 00:22:13.336 "params": { 00:22:13.336 "name": "key0", 00:22:13.336 "path": "/tmp/tmp.M071EIy4YK" 00:22:13.336 } 00:22:13.336 }, 00:22:13.336 { 00:22:13.336 "method": "keyring_file_add_key", 00:22:13.336 "params": { 00:22:13.336 "name": "key1", 00:22:13.336 "path": "/tmp/tmp.s1jRSHpDaT" 00:22:13.336 } 00:22:13.336 } 00:22:13.336 ] 00:22:13.336 }, 00:22:13.336 { 00:22:13.336 "subsystem": "iobuf", 00:22:13.336 "config": [ 00:22:13.336 { 00:22:13.336 "method": "iobuf_set_options", 00:22:13.336 "params": { 00:22:13.336 "small_pool_count": 8192, 00:22:13.336 "large_pool_count": 1024, 00:22:13.336 "small_bufsize": 8192, 00:22:13.336 "large_bufsize": 135168 00:22:13.336 } 00:22:13.336 } 00:22:13.336 ] 00:22:13.336 }, 00:22:13.336 { 00:22:13.336 "subsystem": "sock", 00:22:13.336 "config": [ 00:22:13.336 { 00:22:13.336 "method": "sock_set_default_impl", 00:22:13.336 "params": { 00:22:13.336 "impl_name": "uring" 00:22:13.336 } 00:22:13.336 }, 00:22:13.336 { 00:22:13.336 "method": "sock_impl_set_options", 00:22:13.336 "params": { 00:22:13.336 "impl_name": "ssl", 00:22:13.336 "recv_buf_size": 4096, 00:22:13.336 "send_buf_size": 4096, 00:22:13.336 "enable_recv_pipe": true, 00:22:13.336 "enable_quickack": false, 00:22:13.336 "enable_placement_id": 0, 00:22:13.336 "enable_zerocopy_send_server": true, 00:22:13.336 "enable_zerocopy_send_client": false, 00:22:13.336 "zerocopy_threshold": 0, 00:22:13.336 "tls_version": 0, 00:22:13.336 "enable_ktls": false 00:22:13.336 } 00:22:13.336 }, 00:22:13.336 { 00:22:13.336 "method": "sock_impl_set_options", 00:22:13.336 "params": { 00:22:13.336 "impl_name": "posix", 00:22:13.336 "recv_buf_size": 2097152, 00:22:13.336 "send_buf_size": 2097152, 00:22:13.336 "enable_recv_pipe": true, 00:22:13.336 "enable_quickack": false, 00:22:13.336 "enable_placement_id": 0, 00:22:13.336 "enable_zerocopy_send_server": true, 00:22:13.336 "enable_zerocopy_send_client": false, 00:22:13.336 "zerocopy_threshold": 0, 00:22:13.336 "tls_version": 0, 00:22:13.336 "enable_ktls": false 00:22:13.336 } 00:22:13.336 }, 00:22:13.336 { 00:22:13.336 "method": "sock_impl_set_options", 00:22:13.336 "params": { 00:22:13.336 "impl_name": "uring", 00:22:13.336 "recv_buf_size": 2097152, 00:22:13.336 "send_buf_size": 2097152, 00:22:13.336 "enable_recv_pipe": true, 00:22:13.336 "enable_quickack": false, 00:22:13.336 "enable_placement_id": 0, 00:22:13.336 "enable_zerocopy_send_server": false, 00:22:13.336 "enable_zerocopy_send_client": false, 00:22:13.336 "zerocopy_threshold": 0, 00:22:13.336 "tls_version": 0, 00:22:13.336 "enable_ktls": false 00:22:13.336 } 00:22:13.336 } 00:22:13.336 ] 00:22:13.336 }, 00:22:13.336 { 00:22:13.336 "subsystem": "vmd", 00:22:13.336 "config": [] 00:22:13.336 }, 00:22:13.336 { 00:22:13.336 "subsystem": "accel", 00:22:13.336 "config": [ 00:22:13.336 { 00:22:13.336 "method": "accel_set_options", 00:22:13.336 "params": { 00:22:13.336 "small_cache_size": 128, 00:22:13.336 "large_cache_size": 16, 00:22:13.336 "task_count": 2048, 00:22:13.336 "sequence_count": 2048, 00:22:13.336 "buf_count": 2048 00:22:13.336 } 00:22:13.336 } 00:22:13.336 ] 00:22:13.336 }, 00:22:13.336 { 00:22:13.336 "subsystem": "bdev", 00:22:13.336 "config": [ 00:22:13.336 { 
00:22:13.336 "method": "bdev_set_options", 00:22:13.336 "params": { 00:22:13.336 "bdev_io_pool_size": 65535, 00:22:13.336 "bdev_io_cache_size": 256, 00:22:13.336 "bdev_auto_examine": true, 00:22:13.336 "iobuf_small_cache_size": 128, 00:22:13.336 "iobuf_large_cache_size": 16 00:22:13.336 } 00:22:13.336 }, 00:22:13.336 { 00:22:13.336 "method": "bdev_raid_set_options", 00:22:13.336 "params": { 00:22:13.336 "process_window_size_kb": 1024 00:22:13.336 } 00:22:13.336 }, 00:22:13.336 { 00:22:13.336 "method": "bdev_iscsi_set_options", 00:22:13.336 "params": { 00:22:13.336 "timeout_sec": 30 00:22:13.336 } 00:22:13.336 }, 00:22:13.336 { 00:22:13.336 "method": "bdev_nvme_set_options", 00:22:13.336 "params": { 00:22:13.336 "action_on_timeout": "none", 00:22:13.336 "timeout_us": 0, 00:22:13.336 "timeout_admin_us": 0, 00:22:13.336 "keep_alive_timeout_ms": 10000, 00:22:13.336 "arbitration_burst": 0, 00:22:13.336 "low_priority_weight": 0, 00:22:13.336 "medium_priority_weight": 0, 00:22:13.336 "high_priority_weight": 0, 00:22:13.336 "nvme_adminq_poll_period_us": 10000, 00:22:13.336 "nvme_ioq_poll_period_us": 0, 00:22:13.336 "io_queue_requests": 512, 00:22:13.336 "delay_cmd_submit": true, 00:22:13.336 "transport_retry_count": 4, 00:22:13.336 "bdev_retry_count": 3, 00:22:13.336 "transport_ack_timeout": 0, 00:22:13.336 "ctrlr_loss_timeout_sec": 0, 00:22:13.336 "reconnect_delay_sec": 0, 00:22:13.336 "fast_io_fail_timeout_sec": 0, 00:22:13.336 "disable_auto_failback": false, 00:22:13.336 "generate_uuids": false, 00:22:13.336 "transport_tos": 0, 00:22:13.336 "nvme_error_stat": false, 00:22:13.336 "rdma_srq_size": 0, 00:22:13.336 "io_path_stat": false, 00:22:13.336 "allow_accel_sequence": false, 00:22:13.336 "rdma_max_cq_size": 0, 00:22:13.336 "rdma_cm_event_timeout_ms": 0, 00:22:13.336 "dhchap_digests": [ 00:22:13.336 "sha256", 00:22:13.336 "sha384", 00:22:13.336 "sha512" 00:22:13.336 ], 00:22:13.336 "dhchap_dhgroups": [ 00:22:13.336 "null", 00:22:13.336 "ffdhe2048", 00:22:13.336 "ffdhe3072", 00:22:13.336 "ffdhe4096", 00:22:13.336 "ffdhe6144", 00:22:13.336 "ffdhe8192" 00:22:13.336 ] 00:22:13.336 } 00:22:13.336 }, 00:22:13.336 { 00:22:13.336 "method": "bdev_nvme_attach_controller", 00:22:13.336 "params": { 00:22:13.336 "name": "nvme0", 00:22:13.336 "trtype": "TCP", 00:22:13.336 "adrfam": "IPv4", 00:22:13.336 "traddr": "127.0.0.1", 00:22:13.336 "trsvcid": "4420", 00:22:13.336 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:13.337 "prchk_reftag": false, 00:22:13.337 "prchk_guard": false, 00:22:13.337 "ctrlr_loss_timeout_sec": 0, 00:22:13.337 "reconnect_delay_sec": 0, 00:22:13.337 "fast_io_fail_timeout_sec": 0, 00:22:13.337 "psk": "key0", 00:22:13.337 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:13.337 "hdgst": false, 00:22:13.337 "ddgst": false 00:22:13.337 } 00:22:13.337 }, 00:22:13.337 { 00:22:13.337 "method": "bdev_nvme_set_hotplug", 00:22:13.337 "params": { 00:22:13.337 "period_us": 100000, 00:22:13.337 "enable": false 00:22:13.337 } 00:22:13.337 }, 00:22:13.337 { 00:22:13.337 "method": "bdev_wait_for_examine" 00:22:13.337 } 00:22:13.337 ] 00:22:13.337 }, 00:22:13.337 { 00:22:13.337 "subsystem": "nbd", 00:22:13.337 "config": [] 00:22:13.337 } 00:22:13.337 ] 00:22:13.337 }' 00:22:13.337 17:11:03 keyring_file -- keyring/file.sh@114 -- # killprocess 85156 00:22:13.337 17:11:03 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 85156 ']' 00:22:13.337 17:11:03 keyring_file -- common/autotest_common.sh@952 -- # kill -0 85156 00:22:13.337 17:11:03 keyring_file -- common/autotest_common.sh@953 -- # uname 
00:22:13.337 17:11:03 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:13.337 17:11:03 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85156 00:22:13.337 killing process with pid 85156 00:22:13.337 Received shutdown signal, test time was about 1.000000 seconds 00:22:13.337 00:22:13.337 Latency(us) 00:22:13.337 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:13.337 =================================================================================================================== 00:22:13.337 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:13.337 17:11:03 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:13.337 17:11:03 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:13.337 17:11:03 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85156' 00:22:13.337 17:11:03 keyring_file -- common/autotest_common.sh@967 -- # kill 85156 00:22:13.337 17:11:03 keyring_file -- common/autotest_common.sh@972 -- # wait 85156 00:22:13.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:13.604 17:11:03 keyring_file -- keyring/file.sh@117 -- # bperfpid=85405 00:22:13.604 17:11:03 keyring_file -- keyring/file.sh@115 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:22:13.604 17:11:03 keyring_file -- keyring/file.sh@119 -- # waitforlisten 85405 /var/tmp/bperf.sock 00:22:13.604 17:11:03 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 85405 ']' 00:22:13.604 17:11:03 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:13.604 17:11:03 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:13.604 17:11:03 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
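A second bdevperf is now started with -z -c /dev/fd/63, and the config echoed on the next line is what arrives on that descriptor, so the new process comes up with key0, key1 and the nvme0 controller already configured instead of being rebuilt over RPC. A hand-run sketch of the same trick using process substitution (binary path and flags are the ones visible in this log; $config is the save_config output captured before the old instance was killed):

  SPDK_DIR=/home/vagrant/spdk_repo/spdk
  "$SPDK_DIR/build/examples/bdevperf" -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
      -r /var/tmp/bperf.sock -z -c <(echo "$config") &
  # Poll until the new instance answers on its RPC socket, as waitforlisten does.
  until "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done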
00:22:13.604 17:11:03 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:22:13.604 "subsystems": [ 00:22:13.604 { 00:22:13.604 "subsystem": "keyring", 00:22:13.604 "config": [ 00:22:13.604 { 00:22:13.604 "method": "keyring_file_add_key", 00:22:13.604 "params": { 00:22:13.604 "name": "key0", 00:22:13.604 "path": "/tmp/tmp.M071EIy4YK" 00:22:13.604 } 00:22:13.604 }, 00:22:13.604 { 00:22:13.604 "method": "keyring_file_add_key", 00:22:13.604 "params": { 00:22:13.604 "name": "key1", 00:22:13.604 "path": "/tmp/tmp.s1jRSHpDaT" 00:22:13.604 } 00:22:13.604 } 00:22:13.604 ] 00:22:13.604 }, 00:22:13.604 { 00:22:13.604 "subsystem": "iobuf", 00:22:13.604 "config": [ 00:22:13.604 { 00:22:13.604 "method": "iobuf_set_options", 00:22:13.604 "params": { 00:22:13.604 "small_pool_count": 8192, 00:22:13.604 "large_pool_count": 1024, 00:22:13.604 "small_bufsize": 8192, 00:22:13.604 "large_bufsize": 135168 00:22:13.604 } 00:22:13.604 } 00:22:13.604 ] 00:22:13.604 }, 00:22:13.604 { 00:22:13.604 "subsystem": "sock", 00:22:13.604 "config": [ 00:22:13.604 { 00:22:13.604 "method": "sock_set_default_impl", 00:22:13.604 "params": { 00:22:13.604 "impl_name": "uring" 00:22:13.604 } 00:22:13.604 }, 00:22:13.604 { 00:22:13.604 "method": "sock_impl_set_options", 00:22:13.604 "params": { 00:22:13.604 "impl_name": "ssl", 00:22:13.604 "recv_buf_size": 4096, 00:22:13.604 "send_buf_size": 4096, 00:22:13.604 "enable_recv_pipe": true, 00:22:13.604 "enable_quickack": false, 00:22:13.604 "enable_placement_id": 0, 00:22:13.604 "enable_zerocopy_send_server": true, 00:22:13.604 "enable_zerocopy_send_client": false, 00:22:13.604 "zerocopy_threshold": 0, 00:22:13.604 "tls_version": 0, 00:22:13.604 "enable_ktls": false 00:22:13.604 } 00:22:13.604 }, 00:22:13.604 { 00:22:13.604 "method": "sock_impl_set_options", 00:22:13.604 "params": { 00:22:13.604 "impl_name": "posix", 00:22:13.604 "recv_buf_size": 2097152, 00:22:13.604 "send_buf_size": 2097152, 00:22:13.604 "enable_recv_pipe": true, 00:22:13.604 "enable_quickack": false, 00:22:13.604 "enable_placement_id": 0, 00:22:13.604 "enable_zerocopy_send_server": true, 00:22:13.604 "enable_zerocopy_send_client": false, 00:22:13.604 "zerocopy_threshold": 0, 00:22:13.604 "tls_version": 0, 00:22:13.604 "enable_ktls": false 00:22:13.604 } 00:22:13.604 }, 00:22:13.604 { 00:22:13.604 "method": "sock_impl_set_options", 00:22:13.604 "params": { 00:22:13.604 "impl_name": "uring", 00:22:13.604 "recv_buf_size": 2097152, 00:22:13.604 "send_buf_size": 2097152, 00:22:13.604 "enable_recv_pipe": true, 00:22:13.604 "enable_quickack": false, 00:22:13.604 "enable_placement_id": 0, 00:22:13.604 "enable_zerocopy_send_server": false, 00:22:13.604 "enable_zerocopy_send_client": false, 00:22:13.604 "zerocopy_threshold": 0, 00:22:13.604 "tls_version": 0, 00:22:13.604 "enable_ktls": false 00:22:13.604 } 00:22:13.604 } 00:22:13.604 ] 00:22:13.604 }, 00:22:13.604 { 00:22:13.604 "subsystem": "vmd", 00:22:13.604 "config": [] 00:22:13.604 }, 00:22:13.604 { 00:22:13.604 "subsystem": "accel", 00:22:13.604 "config": [ 00:22:13.604 { 00:22:13.604 "method": "accel_set_options", 00:22:13.604 "params": { 00:22:13.604 "small_cache_size": 128, 00:22:13.604 "large_cache_size": 16, 00:22:13.604 "task_count": 2048, 00:22:13.604 "sequence_count": 2048, 00:22:13.604 "buf_count": 2048 00:22:13.604 } 00:22:13.605 } 00:22:13.605 ] 00:22:13.605 }, 00:22:13.605 { 00:22:13.605 "subsystem": "bdev", 00:22:13.605 "config": [ 00:22:13.605 { 00:22:13.605 "method": "bdev_set_options", 00:22:13.605 "params": { 00:22:13.605 "bdev_io_pool_size": 65535, 
00:22:13.605 "bdev_io_cache_size": 256, 00:22:13.605 "bdev_auto_examine": true, 00:22:13.605 "iobuf_small_cache_size": 128, 00:22:13.605 "iobuf_large_cache_size": 16 00:22:13.605 } 00:22:13.605 }, 00:22:13.605 { 00:22:13.605 "method": "bdev_raid_set_options", 00:22:13.605 "params": { 00:22:13.605 "process_window_size_kb": 1024 00:22:13.605 } 00:22:13.605 }, 00:22:13.605 { 00:22:13.605 "method": "bdev_iscsi_set_options", 00:22:13.605 "params": { 00:22:13.605 "timeout_sec": 30 00:22:13.605 } 00:22:13.605 }, 00:22:13.605 { 00:22:13.605 "method": "bdev_nvme_set_options", 00:22:13.605 "params": { 00:22:13.605 "action_on_timeout": "none", 00:22:13.605 "timeout_us": 0, 00:22:13.605 "timeout_admin_us": 0, 00:22:13.605 "keep_alive_timeout_ms": 10000, 00:22:13.605 "arbitration_burst": 0, 00:22:13.605 "low_priority_weight": 0, 00:22:13.605 "medium_priority_weight": 0, 00:22:13.605 "high_priority_weight": 0, 00:22:13.605 "nvme_adminq_poll_period_us": 10000, 00:22:13.605 "nvme_ioq_poll_period_us": 0, 00:22:13.605 "io_queue_requests": 512, 00:22:13.605 "delay_cmd_submit": true, 00:22:13.605 "transport_retry_count": 4, 00:22:13.605 "bdev_retry_count": 3, 00:22:13.605 "transport_ack_timeout": 0, 00:22:13.605 "ctrlr_loss_timeout_sec": 0, 00:22:13.605 "reconnect_delay_sec": 0, 00:22:13.605 "fast_io_fail_timeout_sec": 0, 00:22:13.605 "disable_auto_failback": false, 00:22:13.605 "generate_uuids": false, 00:22:13.605 "transport_tos": 0, 00:22:13.605 "nvme_error_stat": false, 00:22:13.605 "rdma_srq_size": 0, 00:22:13.605 "io_path_stat": false, 00:22:13.605 "allow_accel_sequence": false, 00:22:13.605 "rdma_max_cq_size": 0, 00:22:13.605 "rdma_cm_event_timeout_ms": 0, 00:22:13.605 "dhchap_digests": [ 00:22:13.605 "sha256", 00:22:13.605 "sha384", 00:22:13.605 "sha512" 00:22:13.605 ], 00:22:13.605 "dhchap_dhgroups": [ 00:22:13.605 "null", 00:22:13.605 "ffdhe2048", 00:22:13.605 "ffdhe3072", 00:22:13.605 "ffdhe4096", 00:22:13.605 "ffdhe6144", 00:22:13.605 "ffdhe8192" 00:22:13.605 ] 00:22:13.605 } 00:22:13.605 }, 00:22:13.605 { 00:22:13.605 "method": "bdev_nvme_attach_controller", 00:22:13.605 "params": { 00:22:13.605 "name": "nvme0", 00:22:13.605 "trtype": "TCP", 00:22:13.605 "adrfam": "IPv4", 00:22:13.605 "traddr": "127.0.0.1", 00:22:13.605 "trsvcid": "4420", 00:22:13.605 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:13.605 "prchk_reftag": false, 00:22:13.605 "prchk_guard": false, 00:22:13.605 "ctrlr_loss_timeout_sec": 0, 00:22:13.605 "reconnect_delay_sec": 0, 00:22:13.605 "fast_io_fail_timeout_sec": 0, 00:22:13.605 "psk": "key0", 00:22:13.605 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:13.605 "hdgst": false, 00:22:13.605 "ddgst": false 00:22:13.605 } 00:22:13.605 }, 00:22:13.605 { 00:22:13.605 "method": "bdev_nvme_set_hotplug", 00:22:13.605 "params": { 00:22:13.605 "period_us": 100000, 00:22:13.605 "enable": false 00:22:13.605 } 00:22:13.605 }, 00:22:13.605 { 00:22:13.605 "method": "bdev_wait_for_examine" 00:22:13.605 } 00:22:13.605 ] 00:22:13.605 }, 00:22:13.605 { 00:22:13.605 "subsystem": "nbd", 00:22:13.605 "config": [] 00:22:13.605 } 00:22:13.605 ] 00:22:13.605 }' 00:22:13.605 17:11:03 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:13.605 17:11:03 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:22:13.605 [2024-07-15 17:11:03.899022] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:22:13.605 [2024-07-15 17:11:03.899110] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85405 ] 00:22:13.864 [2024-07-15 17:11:04.032519] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:13.864 [2024-07-15 17:11:04.140592] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:14.123 [2024-07-15 17:11:04.275534] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:22:14.123 [2024-07-15 17:11:04.329101] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:14.690 17:11:04 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:14.690 17:11:04 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:22:14.690 17:11:04 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:22:14.690 17:11:04 keyring_file -- keyring/file.sh@120 -- # jq length 00:22:14.690 17:11:04 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:14.949 17:11:05 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:22:14.949 17:11:05 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:22:14.949 17:11:05 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:22:14.949 17:11:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:14.949 17:11:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:14.949 17:11:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:14.949 17:11:05 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:15.207 17:11:05 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:22:15.207 17:11:05 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:22:15.207 17:11:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:15.207 17:11:05 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:22:15.207 17:11:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:15.207 17:11:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:22:15.207 17:11:05 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:15.464 17:11:05 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:22:15.464 17:11:05 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:22:15.464 17:11:05 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:22:15.464 17:11:05 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:22:15.722 17:11:05 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:22:15.722 17:11:05 keyring_file -- keyring/file.sh@1 -- # cleanup 00:22:15.722 17:11:05 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.M071EIy4YK /tmp/tmp.s1jRSHpDaT 00:22:15.722 17:11:05 keyring_file -- keyring/file.sh@20 -- # killprocess 85405 00:22:15.722 17:11:05 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 85405 ']' 00:22:15.722 17:11:05 keyring_file -- common/autotest_common.sh@952 -- # kill -0 85405 00:22:15.722 17:11:05 keyring_file -- 
common/autotest_common.sh@953 -- # uname 00:22:15.722 17:11:05 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:15.722 17:11:05 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85405 00:22:15.722 killing process with pid 85405 00:22:15.722 Received shutdown signal, test time was about 1.000000 seconds 00:22:15.722 00:22:15.722 Latency(us) 00:22:15.722 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:15.723 =================================================================================================================== 00:22:15.723 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:15.723 17:11:05 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:15.723 17:11:05 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:15.723 17:11:05 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85405' 00:22:15.723 17:11:05 keyring_file -- common/autotest_common.sh@967 -- # kill 85405 00:22:15.723 17:11:05 keyring_file -- common/autotest_common.sh@972 -- # wait 85405 00:22:15.979 17:11:06 keyring_file -- keyring/file.sh@21 -- # killprocess 85139 00:22:15.979 17:11:06 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 85139 ']' 00:22:15.979 17:11:06 keyring_file -- common/autotest_common.sh@952 -- # kill -0 85139 00:22:15.979 17:11:06 keyring_file -- common/autotest_common.sh@953 -- # uname 00:22:15.979 17:11:06 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:15.979 17:11:06 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85139 00:22:15.979 killing process with pid 85139 00:22:15.979 17:11:06 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:15.979 17:11:06 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:15.979 17:11:06 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85139' 00:22:15.979 17:11:06 keyring_file -- common/autotest_common.sh@967 -- # kill 85139 00:22:15.979 [2024-07-15 17:11:06.193933] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:15.979 17:11:06 keyring_file -- common/autotest_common.sh@972 -- # wait 85139 00:22:16.543 00:22:16.543 real 0m15.455s 00:22:16.543 user 0m38.342s 00:22:16.543 sys 0m3.029s 00:22:16.543 17:11:06 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:16.543 ************************************ 00:22:16.543 END TEST keyring_file 00:22:16.543 ************************************ 00:22:16.543 17:11:06 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:22:16.543 17:11:06 -- common/autotest_common.sh@1142 -- # return 0 00:22:16.543 17:11:06 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:22:16.543 17:11:06 -- spdk/autotest.sh@297 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:22:16.543 17:11:06 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:22:16.543 17:11:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:16.543 17:11:06 -- common/autotest_common.sh@10 -- # set +x 00:22:16.543 ************************************ 00:22:16.543 START TEST keyring_linux 00:22:16.543 ************************************ 00:22:16.543 17:11:06 keyring_linux -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:22:16.543 * Looking for test 
storage... 00:22:16.543 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:22:16.543 17:11:06 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:22:16.543 17:11:06 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:16.543 17:11:06 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:22:16.543 17:11:06 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:16.543 17:11:06 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:16.543 17:11:06 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:16.543 17:11:06 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:16.543 17:11:06 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:16.543 17:11:06 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:16.543 17:11:06 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:16.543 17:11:06 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:16.543 17:11:06 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:16.543 17:11:06 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:16.543 17:11:06 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0b4e8503-7bac-4879-926a-209303c4b3da 00:22:16.543 17:11:06 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=0b4e8503-7bac-4879-926a-209303c4b3da 00:22:16.543 17:11:06 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:16.543 17:11:06 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:16.543 17:11:06 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:16.543 17:11:06 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:16.543 17:11:06 keyring_linux -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:16.543 17:11:06 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:16.543 17:11:06 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:16.543 17:11:06 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:16.543 17:11:06 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.543 17:11:06 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.543 17:11:06 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.543 17:11:06 keyring_linux -- paths/export.sh@5 -- # export PATH 00:22:16.543 17:11:06 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.543 17:11:06 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:22:16.543 17:11:06 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:16.543 17:11:06 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:16.543 17:11:06 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:16.543 17:11:06 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:16.543 17:11:06 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:16.543 17:11:06 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:16.543 17:11:06 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:16.543 17:11:06 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:16.543 17:11:06 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:22:16.543 17:11:06 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:22:16.543 17:11:06 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:22:16.543 17:11:06 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:22:16.543 17:11:06 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:22:16.543 17:11:06 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:22:16.543 17:11:06 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:22:16.543 17:11:06 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:22:16.543 17:11:06 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:22:16.543 17:11:06 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:22:16.543 17:11:06 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:22:16.543 17:11:06 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:22:16.543 17:11:06 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:22:16.543 17:11:06 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:22:16.543 17:11:06 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:22:16.543 17:11:06 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:16.543 17:11:06 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:22:16.543 17:11:06 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:22:16.543 17:11:06 keyring_linux -- nvmf/common.sh@705 -- # python - 00:22:16.543 17:11:06 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:22:16.543 /tmp/:spdk-test:key0 00:22:16.543 17:11:06 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:22:16.543 17:11:06 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:22:16.543 17:11:06 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:22:16.543 17:11:06 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:22:16.543 17:11:06 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:22:16.543 17:11:06 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:22:16.543 17:11:06 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:22:16.543 17:11:06 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:22:16.543 17:11:06 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:22:16.543 17:11:06 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:22:16.543 17:11:06 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:16.543 17:11:06 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:22:16.543 17:11:06 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:22:16.543 17:11:06 keyring_linux -- nvmf/common.sh@705 -- # python - 00:22:16.801 17:11:06 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:22:16.801 /tmp/:spdk-test:key1 00:22:16.801 17:11:06 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:22:16.801 17:11:06 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=85532 00:22:16.801 17:11:06 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 85532 00:22:16.801 17:11:06 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 85532 ']' 00:22:16.801 17:11:06 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:16.801 17:11:06 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:16.801 17:11:06 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:16.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:16.801 17:11:06 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:16.801 17:11:06 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:22:16.801 17:11:06 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:16.801 [2024-07-15 17:11:06.901798] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:22:16.801 [2024-07-15 17:11:06.901893] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85532 ] 00:22:16.801 [2024-07-15 17:11:07.037524] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:17.058 [2024-07-15 17:11:07.149063] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:17.058 [2024-07-15 17:11:07.202568] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:22:17.628 17:11:07 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:17.628 17:11:07 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:22:17.628 17:11:07 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:22:17.628 17:11:07 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:17.628 17:11:07 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:22:17.628 [2024-07-15 17:11:07.851430] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:17.628 null0 00:22:17.628 [2024-07-15 17:11:07.883381] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:17.628 [2024-07-15 17:11:07.883610] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:22:17.628 17:11:07 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:17.628 17:11:07 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:22:17.628 429708116 00:22:17.628 17:11:07 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:22:17.628 184326198 00:22:17.628 17:11:07 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=85554 00:22:17.628 17:11:07 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:22:17.628 17:11:07 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 85554 /var/tmp/bperf.sock 00:22:17.628 17:11:07 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 85554 ']' 00:22:17.628 17:11:07 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:17.628 17:11:07 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:17.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:17.628 17:11:07 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:17.628 17:11:07 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:17.628 17:11:07 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:22:17.887 [2024-07-15 17:11:07.956823] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:22:17.887 [2024-07-15 17:11:07.957302] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85554 ] 00:22:17.887 [2024-07-15 17:11:08.089932] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:18.145 [2024-07-15 17:11:08.201978] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:18.711 17:11:08 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:18.711 17:11:08 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:22:18.711 17:11:08 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:22:18.711 17:11:08 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:22:18.969 17:11:09 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:22:18.969 17:11:09 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:19.252 [2024-07-15 17:11:09.451157] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:22:19.252 17:11:09 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:22:19.252 17:11:09 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:22:19.511 [2024-07-15 17:11:09.708305] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:19.511 nvme0n1 00:22:19.511 17:11:09 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:22:19.511 17:11:09 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:22:19.511 17:11:09 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:22:19.511 17:11:09 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:22:19.511 17:11:09 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:19.511 17:11:09 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:22:19.768 17:11:10 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:22:19.768 17:11:10 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:22:19.768 17:11:10 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:22:19.768 17:11:10 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:22:19.768 17:11:10 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:19.768 17:11:10 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:19.768 17:11:10 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:22:20.026 17:11:10 keyring_linux -- keyring/linux.sh@25 -- # sn=429708116 00:22:20.026 17:11:10 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:22:20.026 17:11:10 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:22:20.026 
17:11:10 keyring_linux -- keyring/linux.sh@26 -- # [[ 429708116 == \4\2\9\7\0\8\1\1\6 ]] 00:22:20.026 17:11:10 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 429708116 00:22:20.026 17:11:10 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:22:20.026 17:11:10 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:20.284 Running I/O for 1 seconds... 00:22:21.220 00:22:21.220 Latency(us) 00:22:21.221 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:21.221 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:22:21.221 nvme0n1 : 1.01 11533.88 45.05 0.00 0.00 11029.42 2964.01 12511.42 00:22:21.221 =================================================================================================================== 00:22:21.221 Total : 11533.88 45.05 0.00 0.00 11029.42 2964.01 12511.42 00:22:21.221 0 00:22:21.221 17:11:11 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:22:21.221 17:11:11 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:22:21.479 17:11:11 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:22:21.479 17:11:11 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:22:21.479 17:11:11 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:22:21.479 17:11:11 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:22:21.479 17:11:11 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:21.479 17:11:11 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:22:21.737 17:11:11 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:22:21.737 17:11:11 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:22:21.737 17:11:11 keyring_linux -- keyring/linux.sh@23 -- # return 00:22:21.737 17:11:11 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:22:21.737 17:11:11 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:22:21.737 17:11:11 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:22:21.737 17:11:11 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:22:21.737 17:11:11 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:21.737 17:11:11 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:22:21.737 17:11:11 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:21.738 17:11:11 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:22:21.738 17:11:11 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:22:21.996 [2024-07-15 17:11:12.127635] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:21.996 [2024-07-15 17:11:12.128315] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168f460 (107): Transport endpoint is not connected 00:22:21.996 [2024-07-15 17:11:12.129304] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168f460 (9): Bad file descriptor 00:22:21.996 [2024-07-15 17:11:12.130300] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:21.996 [2024-07-15 17:11:12.130320] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:22:21.996 [2024-07-15 17:11:12.130330] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:21.996 request: 00:22:21.996 { 00:22:21.996 "name": "nvme0", 00:22:21.996 "trtype": "tcp", 00:22:21.996 "traddr": "127.0.0.1", 00:22:21.996 "adrfam": "ipv4", 00:22:21.996 "trsvcid": "4420", 00:22:21.996 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:21.996 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:21.996 "prchk_reftag": false, 00:22:21.996 "prchk_guard": false, 00:22:21.996 "hdgst": false, 00:22:21.996 "ddgst": false, 00:22:21.996 "psk": ":spdk-test:key1", 00:22:21.996 "method": "bdev_nvme_attach_controller", 00:22:21.996 "req_id": 1 00:22:21.996 } 00:22:21.996 Got JSON-RPC error response 00:22:21.996 response: 00:22:21.996 { 00:22:21.996 "code": -5, 00:22:21.996 "message": "Input/output error" 00:22:21.996 } 00:22:21.996 17:11:12 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:22:21.996 17:11:12 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:21.996 17:11:12 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:21.996 17:11:12 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:21.996 17:11:12 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:22:21.996 17:11:12 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:22:21.996 17:11:12 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:22:21.996 17:11:12 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:22:21.996 17:11:12 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:22:21.996 17:11:12 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:22:21.996 17:11:12 keyring_linux -- keyring/linux.sh@33 -- # sn=429708116 00:22:21.996 17:11:12 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 429708116 00:22:21.996 1 links removed 00:22:21.996 17:11:12 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:22:21.996 17:11:12 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:22:21.996 17:11:12 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:22:21.996 17:11:12 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:22:21.996 17:11:12 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:22:21.996 17:11:12 keyring_linux -- keyring/linux.sh@33 -- # sn=184326198 00:22:21.996 17:11:12 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 184326198 00:22:21.996 1 links removed 00:22:21.996 17:11:12 keyring_linux 
-- keyring/linux.sh@41 -- # killprocess 85554 00:22:21.996 17:11:12 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 85554 ']' 00:22:21.996 17:11:12 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 85554 00:22:21.996 17:11:12 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:22:21.996 17:11:12 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:21.996 17:11:12 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85554 00:22:21.996 17:11:12 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:21.996 17:11:12 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:21.996 killing process with pid 85554 00:22:21.996 17:11:12 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85554' 00:22:21.996 17:11:12 keyring_linux -- common/autotest_common.sh@967 -- # kill 85554 00:22:21.996 Received shutdown signal, test time was about 1.000000 seconds 00:22:21.996 00:22:21.996 Latency(us) 00:22:21.996 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:21.996 =================================================================================================================== 00:22:21.996 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:21.996 17:11:12 keyring_linux -- common/autotest_common.sh@972 -- # wait 85554 00:22:22.255 17:11:12 keyring_linux -- keyring/linux.sh@42 -- # killprocess 85532 00:22:22.255 17:11:12 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 85532 ']' 00:22:22.255 17:11:12 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 85532 00:22:22.255 17:11:12 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:22:22.255 17:11:12 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:22.255 17:11:12 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85532 00:22:22.255 17:11:12 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:22.255 17:11:12 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:22.255 killing process with pid 85532 00:22:22.255 17:11:12 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85532' 00:22:22.255 17:11:12 keyring_linux -- common/autotest_common.sh@967 -- # kill 85532 00:22:22.255 17:11:12 keyring_linux -- common/autotest_common.sh@972 -- # wait 85532 00:22:22.822 ************************************ 00:22:22.822 END TEST keyring_linux 00:22:22.822 ************************************ 00:22:22.822 00:22:22.822 real 0m6.185s 00:22:22.822 user 0m11.863s 00:22:22.822 sys 0m1.554s 00:22:22.822 17:11:12 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:22.822 17:11:12 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:22:22.822 17:11:12 -- common/autotest_common.sh@1142 -- # return 0 00:22:22.822 17:11:12 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:22:22.822 17:11:12 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:22:22.822 17:11:12 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:22:22.822 17:11:12 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:22:22.822 17:11:12 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:22:22.822 17:11:12 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:22:22.822 17:11:12 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:22:22.822 17:11:12 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:22:22.822 17:11:12 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 
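The keyring_linux pass that just finished exercises kernel-keyring-backed TLS PSKs instead of the file-backed keys used by keyring_file: the interchange-format strings produced by the prep_key/format_interchange_psk helpers are loaded into the session keyring with keyctl, handed to bdevperf by key name, and unlinked again on cleanup. A condensed sketch of that flow, using only commands and values that appear in the trace above:

    # load the two PSKs into the session keyring (@s); keyctl prints their serial numbers
    keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s
    keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s

    # enable kernel-keyring lookups in bdevperf, then attach by key name
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
        -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0

    # the serial can be recovered later to verify or remove the key
    sn=$(keyctl search @s user :spdk-test:key0)
    keyctl print "$sn"
    keyctl unlink "$sn"

Attaching with :spdk-test:key1 instead is exercised as the negative case above: the connect is expected to fail, and the RPC returns -5 (Input/output error), which the NOT wrapper asserts.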
00:22:22.822 17:11:12 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:22:22.822 17:11:12 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:22:22.822 17:11:12 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:22:22.822 17:11:12 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:22:22.822 17:11:12 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:22:22.822 17:11:12 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:22:22.822 17:11:12 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:22:22.822 17:11:12 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:22:22.822 17:11:12 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:22.822 17:11:12 -- common/autotest_common.sh@10 -- # set +x 00:22:22.822 17:11:12 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:22:22.822 17:11:12 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:22:22.822 17:11:12 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:22:22.822 17:11:12 -- common/autotest_common.sh@10 -- # set +x 00:22:24.210 INFO: APP EXITING 00:22:24.210 INFO: killing all VMs 00:22:24.210 INFO: killing vhost app 00:22:24.210 INFO: EXIT DONE 00:22:24.779 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:24.779 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:22:24.779 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:22:25.715 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:25.715 Cleaning 00:22:25.715 Removing: /var/run/dpdk/spdk0/config 00:22:25.715 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:22:25.715 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:22:25.715 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:22:25.715 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:22:25.715 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:22:25.715 Removing: /var/run/dpdk/spdk0/hugepage_info 00:22:25.715 Removing: /var/run/dpdk/spdk1/config 00:22:25.715 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:22:25.715 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:22:25.715 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:22:25.715 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:22:25.716 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:22:25.716 Removing: /var/run/dpdk/spdk1/hugepage_info 00:22:25.716 Removing: /var/run/dpdk/spdk2/config 00:22:25.716 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:22:25.716 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:22:25.716 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:22:25.716 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:22:25.716 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:22:25.716 Removing: /var/run/dpdk/spdk2/hugepage_info 00:22:25.716 Removing: /var/run/dpdk/spdk3/config 00:22:25.716 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:22:25.716 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:22:25.716 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:22:25.716 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:22:25.716 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:22:25.716 Removing: /var/run/dpdk/spdk3/hugepage_info 00:22:25.716 Removing: /var/run/dpdk/spdk4/config 00:22:25.716 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:22:25.716 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:22:25.716 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:22:25.716 Removing: 
/var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:22:25.716 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:22:25.716 Removing: /var/run/dpdk/spdk4/hugepage_info 00:22:25.716 Removing: /dev/shm/nvmf_trace.0 00:22:25.716 Removing: /dev/shm/spdk_tgt_trace.pid58616 00:22:25.716 Removing: /var/run/dpdk/spdk0 00:22:25.716 Removing: /var/run/dpdk/spdk1 00:22:25.716 Removing: /var/run/dpdk/spdk2 00:22:25.716 Removing: /var/run/dpdk/spdk3 00:22:25.716 Removing: /var/run/dpdk/spdk4 00:22:25.716 Removing: /var/run/dpdk/spdk_pid58471 00:22:25.716 Removing: /var/run/dpdk/spdk_pid58616 00:22:25.716 Removing: /var/run/dpdk/spdk_pid58814 00:22:25.716 Removing: /var/run/dpdk/spdk_pid58895 00:22:25.716 Removing: /var/run/dpdk/spdk_pid58922 00:22:25.716 Removing: /var/run/dpdk/spdk_pid59032 00:22:25.716 Removing: /var/run/dpdk/spdk_pid59050 00:22:25.716 Removing: /var/run/dpdk/spdk_pid59168 00:22:25.716 Removing: /var/run/dpdk/spdk_pid59353 00:22:25.716 Removing: /var/run/dpdk/spdk_pid59499 00:22:25.716 Removing: /var/run/dpdk/spdk_pid59569 00:22:25.716 Removing: /var/run/dpdk/spdk_pid59640 00:22:25.716 Removing: /var/run/dpdk/spdk_pid59725 00:22:25.716 Removing: /var/run/dpdk/spdk_pid59802 00:22:25.716 Removing: /var/run/dpdk/spdk_pid59835 00:22:25.716 Removing: /var/run/dpdk/spdk_pid59871 00:22:25.716 Removing: /var/run/dpdk/spdk_pid59932 00:22:25.716 Removing: /var/run/dpdk/spdk_pid60032 00:22:25.716 Removing: /var/run/dpdk/spdk_pid60464 00:22:25.716 Removing: /var/run/dpdk/spdk_pid60511 00:22:25.716 Removing: /var/run/dpdk/spdk_pid60562 00:22:25.716 Removing: /var/run/dpdk/spdk_pid60578 00:22:25.716 Removing: /var/run/dpdk/spdk_pid60645 00:22:25.716 Removing: /var/run/dpdk/spdk_pid60661 00:22:25.716 Removing: /var/run/dpdk/spdk_pid60728 00:22:25.716 Removing: /var/run/dpdk/spdk_pid60744 00:22:25.716 Removing: /var/run/dpdk/spdk_pid60795 00:22:25.716 Removing: /var/run/dpdk/spdk_pid60813 00:22:25.716 Removing: /var/run/dpdk/spdk_pid60853 00:22:25.716 Removing: /var/run/dpdk/spdk_pid60871 00:22:25.716 Removing: /var/run/dpdk/spdk_pid60994 00:22:25.716 Removing: /var/run/dpdk/spdk_pid61029 00:22:25.716 Removing: /var/run/dpdk/spdk_pid61098 00:22:25.716 Removing: /var/run/dpdk/spdk_pid61155 00:22:25.716 Removing: /var/run/dpdk/spdk_pid61180 00:22:25.716 Removing: /var/run/dpdk/spdk_pid61238 00:22:25.716 Removing: /var/run/dpdk/spdk_pid61273 00:22:25.716 Removing: /var/run/dpdk/spdk_pid61307 00:22:25.716 Removing: /var/run/dpdk/spdk_pid61342 00:22:25.716 Removing: /var/run/dpdk/spdk_pid61376 00:22:25.716 Removing: /var/run/dpdk/spdk_pid61411 00:22:25.716 Removing: /var/run/dpdk/spdk_pid61445 00:22:25.716 Removing: /var/run/dpdk/spdk_pid61480 00:22:25.716 Removing: /var/run/dpdk/spdk_pid61514 00:22:25.716 Removing: /var/run/dpdk/spdk_pid61549 00:22:25.716 Removing: /var/run/dpdk/spdk_pid61582 00:22:25.716 Removing: /var/run/dpdk/spdk_pid61618 00:22:25.716 Removing: /var/run/dpdk/spdk_pid61647 00:22:25.716 Removing: /var/run/dpdk/spdk_pid61687 00:22:25.716 Removing: /var/run/dpdk/spdk_pid61716 00:22:25.716 Removing: /var/run/dpdk/spdk_pid61756 00:22:25.716 Removing: /var/run/dpdk/spdk_pid61785 00:22:25.974 Removing: /var/run/dpdk/spdk_pid61828 00:22:25.974 Removing: /var/run/dpdk/spdk_pid61866 00:22:25.974 Removing: /var/run/dpdk/spdk_pid61900 00:22:25.975 Removing: /var/run/dpdk/spdk_pid61938 00:22:25.975 Removing: /var/run/dpdk/spdk_pid62002 00:22:25.975 Removing: /var/run/dpdk/spdk_pid62095 00:22:25.975 Removing: /var/run/dpdk/spdk_pid62402 00:22:25.975 Removing: /var/run/dpdk/spdk_pid62415 00:22:25.975 
Removing: /var/run/dpdk/spdk_pid62454 00:22:25.975 Removing: /var/run/dpdk/spdk_pid62467 00:22:25.975 Removing: /var/run/dpdk/spdk_pid62483 00:22:25.975 Removing: /var/run/dpdk/spdk_pid62507 00:22:25.975 Removing: /var/run/dpdk/spdk_pid62521 00:22:25.975 Removing: /var/run/dpdk/spdk_pid62542 00:22:25.975 Removing: /var/run/dpdk/spdk_pid62561 00:22:25.975 Removing: /var/run/dpdk/spdk_pid62580 00:22:25.975 Removing: /var/run/dpdk/spdk_pid62595 00:22:25.975 Removing: /var/run/dpdk/spdk_pid62620 00:22:25.975 Removing: /var/run/dpdk/spdk_pid62628 00:22:25.975 Removing: /var/run/dpdk/spdk_pid62649 00:22:25.975 Removing: /var/run/dpdk/spdk_pid62668 00:22:25.975 Removing: /var/run/dpdk/spdk_pid62687 00:22:25.975 Removing: /var/run/dpdk/spdk_pid62708 00:22:25.975 Removing: /var/run/dpdk/spdk_pid62727 00:22:25.975 Removing: /var/run/dpdk/spdk_pid62735 00:22:25.975 Removing: /var/run/dpdk/spdk_pid62757 00:22:25.975 Removing: /var/run/dpdk/spdk_pid62793 00:22:25.975 Removing: /var/run/dpdk/spdk_pid62807 00:22:25.975 Removing: /var/run/dpdk/spdk_pid62836 00:22:25.975 Removing: /var/run/dpdk/spdk_pid62900 00:22:25.975 Removing: /var/run/dpdk/spdk_pid62934 00:22:25.975 Removing: /var/run/dpdk/spdk_pid62938 00:22:25.975 Removing: /var/run/dpdk/spdk_pid62972 00:22:25.975 Removing: /var/run/dpdk/spdk_pid62988 00:22:25.975 Removing: /var/run/dpdk/spdk_pid62990 00:22:25.975 Removing: /var/run/dpdk/spdk_pid63039 00:22:25.975 Removing: /var/run/dpdk/spdk_pid63053 00:22:25.975 Removing: /var/run/dpdk/spdk_pid63081 00:22:25.975 Removing: /var/run/dpdk/spdk_pid63096 00:22:25.975 Removing: /var/run/dpdk/spdk_pid63106 00:22:25.975 Removing: /var/run/dpdk/spdk_pid63115 00:22:25.975 Removing: /var/run/dpdk/spdk_pid63125 00:22:25.975 Removing: /var/run/dpdk/spdk_pid63140 00:22:25.975 Removing: /var/run/dpdk/spdk_pid63149 00:22:25.975 Removing: /var/run/dpdk/spdk_pid63159 00:22:25.975 Removing: /var/run/dpdk/spdk_pid63187 00:22:25.975 Removing: /var/run/dpdk/spdk_pid63214 00:22:25.975 Removing: /var/run/dpdk/spdk_pid63229 00:22:25.975 Removing: /var/run/dpdk/spdk_pid63257 00:22:25.975 Removing: /var/run/dpdk/spdk_pid63267 00:22:25.975 Removing: /var/run/dpdk/spdk_pid63280 00:22:25.975 Removing: /var/run/dpdk/spdk_pid63320 00:22:25.975 Removing: /var/run/dpdk/spdk_pid63332 00:22:25.975 Removing: /var/run/dpdk/spdk_pid63364 00:22:25.975 Removing: /var/run/dpdk/spdk_pid63371 00:22:25.975 Removing: /var/run/dpdk/spdk_pid63379 00:22:25.975 Removing: /var/run/dpdk/spdk_pid63392 00:22:25.975 Removing: /var/run/dpdk/spdk_pid63394 00:22:25.975 Removing: /var/run/dpdk/spdk_pid63407 00:22:25.975 Removing: /var/run/dpdk/spdk_pid63409 00:22:25.975 Removing: /var/run/dpdk/spdk_pid63422 00:22:25.975 Removing: /var/run/dpdk/spdk_pid63496 00:22:25.975 Removing: /var/run/dpdk/spdk_pid63548 00:22:25.975 Removing: /var/run/dpdk/spdk_pid63654 00:22:25.975 Removing: /var/run/dpdk/spdk_pid63687 00:22:25.975 Removing: /var/run/dpdk/spdk_pid63732 00:22:25.975 Removing: /var/run/dpdk/spdk_pid63752 00:22:25.975 Removing: /var/run/dpdk/spdk_pid63772 00:22:25.975 Removing: /var/run/dpdk/spdk_pid63792 00:22:25.975 Removing: /var/run/dpdk/spdk_pid63823 00:22:25.975 Removing: /var/run/dpdk/spdk_pid63844 00:22:25.975 Removing: /var/run/dpdk/spdk_pid63914 00:22:25.975 Removing: /var/run/dpdk/spdk_pid63930 00:22:25.975 Removing: /var/run/dpdk/spdk_pid63980 00:22:25.975 Removing: /var/run/dpdk/spdk_pid64058 00:22:25.975 Removing: /var/run/dpdk/spdk_pid64114 00:22:25.975 Removing: /var/run/dpdk/spdk_pid64143 00:22:25.975 Removing: 
/var/run/dpdk/spdk_pid64234 00:22:25.975 Removing: /var/run/dpdk/spdk_pid64277 00:22:25.975 Removing: /var/run/dpdk/spdk_pid64315 00:22:25.975 Removing: /var/run/dpdk/spdk_pid64534 00:22:25.975 Removing: /var/run/dpdk/spdk_pid64631 00:22:25.975 Removing: /var/run/dpdk/spdk_pid64660 00:22:25.975 Removing: /var/run/dpdk/spdk_pid64976 00:22:25.975 Removing: /var/run/dpdk/spdk_pid65020 00:22:26.236 Removing: /var/run/dpdk/spdk_pid65313 00:22:26.236 Removing: /var/run/dpdk/spdk_pid65725 00:22:26.236 Removing: /var/run/dpdk/spdk_pid65995 00:22:26.236 Removing: /var/run/dpdk/spdk_pid66778 00:22:26.236 Removing: /var/run/dpdk/spdk_pid67605 00:22:26.236 Removing: /var/run/dpdk/spdk_pid67721 00:22:26.236 Removing: /var/run/dpdk/spdk_pid67789 00:22:26.236 Removing: /var/run/dpdk/spdk_pid69044 00:22:26.236 Removing: /var/run/dpdk/spdk_pid69250 00:22:26.236 Removing: /var/run/dpdk/spdk_pid72585 00:22:26.236 Removing: /var/run/dpdk/spdk_pid72893 00:22:26.236 Removing: /var/run/dpdk/spdk_pid73002 00:22:26.236 Removing: /var/run/dpdk/spdk_pid73136 00:22:26.236 Removing: /var/run/dpdk/spdk_pid73163 00:22:26.236 Removing: /var/run/dpdk/spdk_pid73191 00:22:26.236 Removing: /var/run/dpdk/spdk_pid73217 00:22:26.236 Removing: /var/run/dpdk/spdk_pid73311 00:22:26.236 Removing: /var/run/dpdk/spdk_pid73446 00:22:26.236 Removing: /var/run/dpdk/spdk_pid73598 00:22:26.236 Removing: /var/run/dpdk/spdk_pid73679 00:22:26.236 Removing: /var/run/dpdk/spdk_pid73867 00:22:26.236 Removing: /var/run/dpdk/spdk_pid73948 00:22:26.236 Removing: /var/run/dpdk/spdk_pid74041 00:22:26.236 Removing: /var/run/dpdk/spdk_pid74351 00:22:26.236 Removing: /var/run/dpdk/spdk_pid74739 00:22:26.236 Removing: /var/run/dpdk/spdk_pid74741 00:22:26.236 Removing: /var/run/dpdk/spdk_pid75020 00:22:26.236 Removing: /var/run/dpdk/spdk_pid75038 00:22:26.236 Removing: /var/run/dpdk/spdk_pid75052 00:22:26.236 Removing: /var/run/dpdk/spdk_pid75085 00:22:26.236 Removing: /var/run/dpdk/spdk_pid75090 00:22:26.236 Removing: /var/run/dpdk/spdk_pid75391 00:22:26.236 Removing: /var/run/dpdk/spdk_pid75434 00:22:26.236 Removing: /var/run/dpdk/spdk_pid75717 00:22:26.236 Removing: /var/run/dpdk/spdk_pid75919 00:22:26.236 Removing: /var/run/dpdk/spdk_pid76301 00:22:26.236 Removing: /var/run/dpdk/spdk_pid76803 00:22:26.236 Removing: /var/run/dpdk/spdk_pid77619 00:22:26.236 Removing: /var/run/dpdk/spdk_pid78201 00:22:26.236 Removing: /var/run/dpdk/spdk_pid78203 00:22:26.236 Removing: /var/run/dpdk/spdk_pid80101 00:22:26.236 Removing: /var/run/dpdk/spdk_pid80167 00:22:26.236 Removing: /var/run/dpdk/spdk_pid80226 00:22:26.236 Removing: /var/run/dpdk/spdk_pid80288 00:22:26.236 Removing: /var/run/dpdk/spdk_pid80403 00:22:26.236 Removing: /var/run/dpdk/spdk_pid80469 00:22:26.236 Removing: /var/run/dpdk/spdk_pid80524 00:22:26.236 Removing: /var/run/dpdk/spdk_pid80584 00:22:26.236 Removing: /var/run/dpdk/spdk_pid80903 00:22:26.236 Removing: /var/run/dpdk/spdk_pid82055 00:22:26.236 Removing: /var/run/dpdk/spdk_pid82200 00:22:26.236 Removing: /var/run/dpdk/spdk_pid82439 00:22:26.236 Removing: /var/run/dpdk/spdk_pid82986 00:22:26.236 Removing: /var/run/dpdk/spdk_pid83145 00:22:26.236 Removing: /var/run/dpdk/spdk_pid83302 00:22:26.236 Removing: /var/run/dpdk/spdk_pid83399 00:22:26.236 Removing: /var/run/dpdk/spdk_pid83558 00:22:26.236 Removing: /var/run/dpdk/spdk_pid83668 00:22:26.236 Removing: /var/run/dpdk/spdk_pid84323 00:22:26.236 Removing: /var/run/dpdk/spdk_pid84353 00:22:26.236 Removing: /var/run/dpdk/spdk_pid84388 00:22:26.236 Removing: /var/run/dpdk/spdk_pid84646 
00:22:26.236 Removing: /var/run/dpdk/spdk_pid84681 00:22:26.236 Removing: /var/run/dpdk/spdk_pid84711 00:22:26.236 Removing: /var/run/dpdk/spdk_pid85139 00:22:26.236 Removing: /var/run/dpdk/spdk_pid85156 00:22:26.236 Removing: /var/run/dpdk/spdk_pid85405 00:22:26.236 Removing: /var/run/dpdk/spdk_pid85532 00:22:26.236 Removing: /var/run/dpdk/spdk_pid85554 00:22:26.236 Clean 00:22:26.496 17:11:16 -- common/autotest_common.sh@1451 -- # return 0 00:22:26.496 17:11:16 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:22:26.496 17:11:16 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:26.497 17:11:16 -- common/autotest_common.sh@10 -- # set +x 00:22:26.497 17:11:16 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:22:26.497 17:11:16 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:26.497 17:11:16 -- common/autotest_common.sh@10 -- # set +x 00:22:26.497 17:11:16 -- spdk/autotest.sh@387 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:22:26.497 17:11:16 -- spdk/autotest.sh@389 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:22:26.497 17:11:16 -- spdk/autotest.sh@389 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:22:26.497 17:11:16 -- spdk/autotest.sh@391 -- # hash lcov 00:22:26.497 17:11:16 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:22:26.497 17:11:16 -- spdk/autotest.sh@393 -- # hostname 00:22:26.497 17:11:16 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1716830599-074-updated-1705279005 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:22:26.756 geninfo: WARNING: invalid characters removed from testname! 
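The lcov invocations that follow merge the baseline and per-test captures and then strip out-of-tree and example sources from the combined tracefile. Condensed, and with a final HTML-report step added for illustration only (genhtml is not run in this part of the log; treat it as an assumed follow-up):

    cd /home/vagrant/spdk_repo/spdk/../output
    lcov -q -a cov_base.info -a cov_test.info -o cov_total.info          # merge baseline + test run
    lcov -q -r cov_total.info '*/dpdk/*' '/usr/*' -o cov_total.info      # drop DPDK and system sources
    genhtml cov_total.info -o coverage_html --branch-coverage --legend   # hypothetical report step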
00:22:53.296 17:11:40 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:54.670 17:11:44 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:57.199 17:11:47 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:23:00.521 17:11:50 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:23:03.822 17:11:53 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:23:08.008 17:11:57 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:23:09.936 17:12:00 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:23:10.195 17:12:00 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:10.195 17:12:00 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:23:10.195 17:12:00 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:10.195 17:12:00 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:10.195 17:12:00 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:10.195 17:12:00 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:10.195 17:12:00 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:10.195 17:12:00 -- paths/export.sh@5 -- $ export PATH 00:23:10.195 17:12:00 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:10.195 17:12:00 -- common/autobuild_common.sh@443 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:23:10.195 17:12:00 -- common/autobuild_common.sh@444 -- $ date +%s 00:23:10.195 17:12:00 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721063520.XXXXXX 00:23:10.195 17:12:00 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721063520.rNQGJB 00:23:10.195 17:12:00 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:23:10.195 17:12:00 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:23:10.195 17:12:00 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:23:10.195 17:12:00 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:23:10.195 17:12:00 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:23:10.195 17:12:00 -- common/autobuild_common.sh@460 -- $ get_config_params 00:23:10.195 17:12:00 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:23:10.195 17:12:00 -- common/autotest_common.sh@10 -- $ set +x 00:23:10.195 17:12:00 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring' 00:23:10.195 17:12:00 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:23:10.195 17:12:00 -- pm/common@17 -- $ local monitor 00:23:10.195 17:12:00 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:23:10.195 17:12:00 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:23:10.195 17:12:00 -- pm/common@25 -- $ sleep 1 00:23:10.195 17:12:00 -- pm/common@21 -- $ date +%s 00:23:10.195 17:12:00 -- pm/common@21 -- $ date +%s 00:23:10.195 17:12:00 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721063520 00:23:10.196 17:12:00 -- pm/common@21 -- $ 
00:23:11.176 17:12:01 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT
00:23:11.176 17:12:01 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10
00:23:11.176 17:12:01 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk
00:23:11.176 17:12:01 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:23:11.176 17:12:01 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:23:11.176 17:12:01 -- spdk/autopackage.sh@19 -- $ timing_finish
00:23:11.176 17:12:01 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:23:11.176 17:12:01 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:23:11.176 17:12:01 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:23:11.176 17:12:01 -- spdk/autopackage.sh@20 -- $ exit 0
00:23:11.176 17:12:01 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:23:11.176 17:12:01 -- pm/common@29 -- $ signal_monitor_resources TERM
00:23:11.176 17:12:01 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:23:11.176 17:12:01 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:23:11.176 17:12:01 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]]
00:23:11.176 17:12:01 -- pm/common@44 -- $ pid=87288
00:23:11.176 17:12:01 -- pm/common@50 -- $ kill -TERM 87288
00:23:11.176 17:12:01 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:23:11.176 17:12:01 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]]
00:23:11.176 17:12:01 -- pm/common@44 -- $ pid=87289
00:23:11.176 17:12:01 -- pm/common@50 -- $ kill -TERM 87289
00:23:11.176 + [[ -n 5102 ]]
00:23:11.176 + sudo kill 5102
00:23:11.187 [Pipeline] }
00:23:11.207 [Pipeline] // timeout
00:23:11.212 [Pipeline] }
00:23:11.232 [Pipeline] // stage
00:23:11.237 [Pipeline] }
00:23:11.249 [Pipeline] // catchError
00:23:11.257 [Pipeline] stage
00:23:11.260 [Pipeline] { (Stop VM)
00:23:11.272 [Pipeline] sh
00:23:11.550 + vagrant halt
00:23:15.767 ==> default: Halting domain...
00:23:21.039 [Pipeline] sh
00:23:21.317 + vagrant destroy -f
00:23:25.510 ==> default: Removing domain...
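The EXIT trap set at autobuild_common.sh@463 fires stop_monitor_resources once autopackage.sh exits, and the teardown mirrors the startup: for each monitor it checks for a pidfile under output/power and sends SIGTERM to the recorded PID (87288 and 87289 above) before the Jenkins stage halts and destroys the Vagrant VM. A compact sketch of that pidfile loop, approximating the pm/common helpers rather than quoting them:

    power_dir=/home/vagrant/spdk_repo/spdk/../output/power
    for monitor in collect-cpu-load collect-vmstat; do
        pidfile="$power_dir/$monitor.pid"
        [[ -e $pidfile ]] || continue          # skip collectors that never started
        kill -TERM "$(<"$pidfile")"            # e.g. kill -TERM 87288 / 87289 in this run
    done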
00:23:25.528 [Pipeline] sh
00:23:25.835 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output
00:23:25.855 [Pipeline] }
00:23:25.870 [Pipeline] // stage
00:23:25.877 [Pipeline] }
00:23:25.893 [Pipeline] // dir
00:23:25.897 [Pipeline] }
00:23:25.914 [Pipeline] // wrap
00:23:25.919 [Pipeline] }
00:23:25.932 [Pipeline] // catchError
00:23:25.937 [Pipeline] stage
00:23:25.938 [Pipeline] { (Epilogue)
00:23:25.946 [Pipeline] sh
00:23:26.244 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:23:34.359 [Pipeline] catchError
00:23:34.361 [Pipeline] {
00:23:34.375 [Pipeline] sh
00:23:34.652 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:23:34.653 Artifacts sizes are good
00:23:34.660 [Pipeline] }
00:23:34.677 [Pipeline] // catchError
00:23:34.688 [Pipeline] archiveArtifacts
00:23:34.694 Archiving artifacts
00:23:34.884 [Pipeline] cleanWs
00:23:34.896 [WS-CLEANUP] Deleting project workspace...
00:23:34.896 [WS-CLEANUP] Deferred wipeout is used...
00:23:34.921 [WS-CLEANUP] done
00:23:34.923 [Pipeline] }
00:23:34.943 [Pipeline] // stage
00:23:34.949 [Pipeline] }
00:23:34.968 [Pipeline] // node
00:23:34.974 [Pipeline] End of Pipeline
00:23:35.017 Finished: SUCCESS
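The Epilogue stage above compresses the collected output, gates the build on artifact size (check_artifacts_size.sh printing "Artifacts sizes are good"), archives the result, and wipes the workspace. The actual check script lives in the jbp repository and is not reproduced in this log; a hypothetical size gate with the same observable behaviour could look like the sketch below, where the directory, threshold, and du-based measurement are all assumptions and only the success message matches the output above.

    artifacts_dir=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output
    limit_mb=2048                                    # hypothetical limit, not taken from the log
    used_mb=$(du -sm "$artifacts_dir" | cut -f1)     # total size of everything to be archived
    if (( used_mb > limit_mb )); then
        echo "Artifacts size ${used_mb}MB exceeds ${limit_mb}MB limit" >&2
        exit 1
    fi
    echo "Artifacts sizes are good"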